Become a Kibana Search Expert - Part 1

Okay, I think we're ready to get started. Thank you very much for joining today's Logz.io webinar, which is the first of a two-part series on Kibana search. The Customer Success organization is running today's webinar, and as our name implies, we're here to help you be successful with Logz.io and ensure that you're getting the most value out of the technology. So if you have any questions around support or training, or any technical questions or concerns, please don't hesitate to reach out to us. You can always contact us via the Intercom chat bubble that appears in the lower right-hand area of the Logz.io UI, or send an email to help@logz.io. We also offer some self-support resources at support.logz.io.

Before we kick off the webinar, I do want to announce the launch of Logz.io Academy, which we're very excited about. If you're left wanting more following the webinar, or you have additional questions or concerns, we would truly recommend that you check it out. It's a self-paced online learning platform that contains tutorials, recordings of webinars like these, ebooks, how-to guides, and much more. If you're already a subscriber to Logz.io, the Logz.io Academy is actually included in your subscription, so if you want to check it out, please visit the Logz.io Academy.

I like to start off with introductions, and I'll begin with myself. My name is Mike Neville-O'Neill, and I'm a senior customer success engineer at Logz.io. I'm joined today by our VP of Customer Success, as well as customer success engineer Eric Alfano, who will be recording your questions as you submit them in the chat. So if anything comes up in the presentation that piques your interest, or you have any questions, please feel free to submit those questions and we'll address them at the end. I'm also joined by two of our customers today: Anton, a VP of Engineering, as well as Alan, who is the platform operations director at SiteSpect. So before we move on, if you gentlemen wouldn't mind introducing yourselves. Anton, why don't we begin with you?

Thanks, Mike. Let me just toggle the camera right here so people can match my face to the headshot in the presentation. Hi everyone, my name is Anton. We're a longtime customer of Logz.io, and later in today's presentation I will show how we use Logz.io to keep our system observable and to know what's going on in our production environment.

Thanks, Anton. Alan? Good morning everyone, I'm Alan, the director of platform operations at SiteSpect. We're a large-volume web traffic business that is entirely log driven. We're also a longtime customer of Logz.io, and I will also be talking today about how we leverage Logz.io search to do troubleshooting and manage our platform.

Great, thanks to you both.

Before we move on to the remainder of the webinar, I'd like to launch the first of four polls. Today's session will be relatively interactive, and we'd love to get a sense of how you would rate your proficiency with Kibana search today. So please feel free to select one of these options and submit; once we're around the 70 percent mark we'll be ready to go. Okay, we've got just a few more responses coming in, about 85 percent participation, so it looks like you folks are lively and wide awake today. We'll probably close out the poll, and let's share those results. It looks like the majority of attendees today are rating themselves as beginners. I think you're going to get a lot of value out of this session; we're really structuring it around the basic search strategies and some tips and tricks to ensure that you're getting the most out of Kibana, which I'll run through, and Anton and Alan will share their tips and tricks as well.

So the agenda for today is: I want to begin by talking a little bit about some steps you can take when you're first shipping your logs into Logz.io to set yourself up for success. Then, once you've got that data in, we'll run through some strategies you can use to interrogate your data, so that you can understand what's going on in your environment and with your business. And then we'll conclude with a conversation with Alan and Anton, who will talk about their own experiences with Kibana search and how they leverage it in the day-to-day operations of their business. So let's

talk about how you can set yourself up for success with respect to logging. One of the first things that we recommend you do is ensure that your data is parsed well. What does that mean? The process of parsing is really just teaching Logz.io how to interpret the data, or logs, that you send to it, so that they're easier to work with, to search against, to visualize, and to create alerts against. So ultimately, how can you parse your data? Well, there are a few different ways you can accomplish that, the first of which is to send us your data as JSON. Logz.io has the ability to automatically parse and identify the key-value pair relationships that are present in JSON, so any data that we receive in that format is automatically going to be understood, or parsed, properly. One of the advantages of sending us your logs as JSON is that if you make any changes to the way your applications or services log in the future, you don't need to update anything in Logz.io; it's going to automatically detect those changes and parse your logs accordingly.
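As a concrete illustration of "logging in JSON", here is a minimal sketch. The `JsonFormatter` class and field names (`level`, `message`, `service`) are illustrative assumptions, not a Logz.io requirement; the point is simply that each log line becomes a self-describing JSON object whose keys a backend can pick up automatically.

```python
import json
import logging

# Hypothetical minimal JSON formatter -- field names are illustrative only.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        # Serialize each record as one single-line JSON object.
        return json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            "service": "checkout",   # example static field identifying the emitter
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Emits one JSON object per log call, e.g. {"level": "INFO", "message": "order created", ...}
logger.info("order created")
```

If a new key is later added to the dictionary, the downstream parser picks it up with no configuration change, which is exactly the advantage described above.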

So if you do have any control over the way your applications, services, or systems are logging, we would strongly recommend that you log in JSON. We know that not everyone necessarily has the ability to control the way their applications, services, or systems are logging, and if that's the case, you can take a couple of different approaches to getting your logs parsed. The first is via log types, which are predefined parsers that Logz.io has created for the most common log types, for example Apache logs, NGINX logs, MySQL logs, AWS logs, and many more. As long as your logs contain a type field with a value for a parser that we've built out, Logz.io will do the rest for you. The last option is parsing as a service, where you have the ability to actually create your own custom parsers within Logz.io. As you can see in the screenshot over here on the right, all you need to do is mouse over the Log Shipping tile and then choose Data Parsing, and you'll have the ability to create your own custom grok filters, which can be applied against your data. But not everyone is necessarily a grok aficionado, or has the time or patience to create and apply those filters, and we have an option for you in that case as well, where our support team is more than happy to partner with you to create custom filters or parsers as needed. They can build those out based on some sample logs and other information that you'll need to provide. So if you're struggling to get your data parsed, please don't suffer in silence; reach out to our support team, they're there to help.
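For a sense of what a custom parser looks like, a grok pattern maps pieces of an unstructured line to named fields. This is a hypothetical example, invented for illustration rather than taken from the Logz.io UI:

```text
# Raw, unstructured log line:
2019-04-02 14:05:33 WARN payment timeout after 2500ms

# A grok pattern that could parse it into fields:
%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{WORD:service} %{GREEDYDATA:message}
```

After parsing, `level`, `service`, and the rest become searchable fields just like the keys of a JSON log.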

So next, let's talk about the three core search types that are available in Kibana: free text, field level, and filters. I'll start with free text searching, because I suspect that's what most people start out with when they first begin to use Logz.io or Kibana search. It's a very simple, Google-like experience where you'll enter a series of search terms and then execute the query; for example, as you see here, "the quick brown fox jumps over the lazy dog". We're not using any operators or any special characters; it's just a pretty much straight-to-the-point query. The second type of searching is more targeted, and that is field-level queries. Unlike the free text searches, which will be run against each and every field in your logs, field-level queries allow you to target specific fields in your data and then search for particular values of those fields. So for example, if you had a log or a data set with fields of fox and dog, and you wanted to search for the quick fox and the lazy dog, you can see how that query would actually look in Kibana. Lastly, we have filter-based searching, which is kind of a visual way in which you can interact with your data, where you can create conditional filters based on the fields in your logs. You can actually see in the screenshot to the right that I've set something up in advance, where we're looking at Apache logs and I've created a filter against the response field for any event where the value is equal to 400.
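To make the three types concrete, here is roughly what each looks like in the query bar; the field names (`fox`, `dog`, `response`) are placeholders for whatever your own parsed logs contain:

```text
# Free text -- runs against every field:
quick brown fox

# Quoted free text -- matches the exact phrase:
"quick brown fox"

# Field level -- target specific fields and values:
fox:quick AND dog:lazy

# Filter based -- built visually in the UI, equivalent to a condition like:
response:400
```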

So what I want to dive into next is how you can actually use these searches in action against your data in Logz.io. But before I do that, I do want to launch just a quick poll to get a sense of the types of search strategies that you in the audience are employing today. You can choose all that apply, maybe it's a mix of the three, but we're definitely interested to see how you're working with your data today, and again we'll wait until we get about a 70% response rate before we move forward. Looks good, let's go ahead and close that poll out, and I'll share those results with you. So it looks like it's a pretty good mix, where folks are leveraging predominantly free text and field-level searching, with a little bit of filter-based querying as well. It's good to see that you're ultimately taking advantage of the various types of search that are available, and I hope that I'm able to show you some new tips and tricks within each one of these particular search types.

So what I'd like to do next is actually jump into Logz.io, and we'll start to run some searches against our data. Before we get into the searching, I do at least want to acquaint you a bit with the interface, and I'll start with this link over here in the query bar, the "uses lucene query syntax" link. I think this is relatively easy to miss, but it's actually quite helpful: it's a direct link to the Elasticsearch documentation on its query syntax. So if you're stuck trying to formulate a query, or maybe you're looking for some inspiration on how you can search your data, this is an excellent resource, in addition of course to our support team, which is available 24/7 to help you with any questions.

You'll also notice there's a histogram running across the center of the page, above the document table where your log events are listed. This histogram shows the number of log events, or documents, that have been ingested by Logz.io over a given time interval; in this case it's 24 hours, which we've selected via the time picker. You can expand that out and use a number of different increments or intervals against which you'd like to search. You'll also see, across the left-hand side of the interface, that we have a number of available fields. This is a listing of the fields that Logz.io has identified and parsed in the logs you've sent thus far, for the top 500 documents that we're seeing, and we're going to work with this quite a bit later on. For those of you who have multiple sub accounts, you also have the ability to select which accounts you are querying against. This probably won't apply to all of you, but for those of you who are using sub accounts with Logz.io, you do have the ability to search those sub accounts on demand, and that's a feature we rolled out relatively recently. So now that you're familiar with the interface at a high level, let's start running some searches.

I'll begin with the free text search. Let's say that you want to get some general visibility into the errors that are occurring in your environment over the last 24 hours or so. You can go ahead and run a very basic search for error, and you'll see that we get a number of different results returned, regardless of case. For example, you'll see here that the value for this level field is ERROR in all caps, but we're also getting hits on lowercase error and multiple variations. So the takeaway there is that when you run a free text search, it isn't case sensitive, and you're going to get all sorts of different results back when you run those searches. This is great for getting some broad visibility into the errors in my environment, but let's say that you wanted to get more specific; maybe there's a specific error to drill down on, an error querying the database in this case. And you'll notice something a bit peculiar: I've entered additional search terms, but the number of hits that we've gotten didn't decrease. It's the same number as when I was running that error search. So what exactly is going on here? I think to understand this behavior, it makes sense to talk a little bit about analyzed fields and tokenization in Kibana.

An analyzed field is a field that Kibana is going to apply an analyzer to. All it's really doing is analyzing the stream of characters and breaking it out into tokens, using separators or delimiters; that's the process of tokenization. In Logz.io, only the message field is analyzed. So what does this actually mean? Typically, what Logz.io is doing is breaking out separate words, or tokens, and then when you run searches, it's going to run the search against all of them. So the query that I've just run, error querying database, is functionally equivalent to joining all of these search terms with OR operators. If you're not familiar with boolean operators, they function as ways to connect and define the relationship between or among search terms. So we're looking for any event that could contain error, or querying, or database, and that's not terribly helpful in terms of identifying the subset of events that we're actually interested in.
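You can build an intuition for this with a few lines of code. This is only a rough imitation of the analysis step (the real Elasticsearch standard analyzer has many more rules), but it shows why "error querying database" matches any event containing any one of those tokens:

```python
import re

# Rough sketch of an analyzer on the "message" field: split the character
# stream into lowercase tokens on non-word separators.
def tokenize(text):
    return [t.lower() for t in re.findall(r"\w+", text)]

def matches_any(message, terms):
    # Unquoted terms are effectively joined with OR: any single token match is a hit.
    tokens = set(tokenize(message))
    return any(t.lower() in tokens for t in terms)

msg = "ERROR querying database: connection refused"
print(tokenize(msg))   # ['error', 'querying', 'database', ...]

# 'error' and 'database' each match a token, so the whole query is a hit;
# note 'query' alone would NOT match the token 'querying' (no stemming here).
print(matches_any(msg, ["error", "query", "database"]))  # True
print(matches_any("all systems nominal", ["error"]))     # False
```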

Like most things in life, with Kibana it pays to be specific. So what we're going to do is wrap those terms in quotes, and then what you'll see is only the subset of events that we're actually interested in seeing: the specific errors relating to querying the database. So that's one way that you can potentially get more specific and construct a more targeted search when you're working with your data. But you can get even more targeted

by working with the fields themselves, as many of you are today. In this case, let's say that I want to work with my Apache access logs in particular. We'll look for a good example here, and you can see that this event has a field of type with a value of Apache access, so we'll grab that, and that's how we'll begin the search. As you'd expect, we're going to get our access logs returned, no real surprises. But a potential gotcha here is that unlike the free text searches, field-level searching is very much case sensitive: if I make any changes to the way that I spell the value for that field, or the field name itself, no results will be returned. So an important takeaway here is that when you're running a field-based search, you need to ensure that both the field name and its value appear exactly the way that they do in your logs; otherwise you won't get the relevant results back. Now, we can also join multiple search terms using boolean operators, as I already have. Maybe in this case we want to see all the requests hitting our web server that are coming from a particular client IP. Again, you can see the field name present in the document, so we'll go ahead and add that, joining it with an AND operator, because we still want to make sure that we're including only our access logs. We'll go ahead and execute that query, and we've further narrowed down our results. We can exclude results as well, using the NOT operator. For example, let's say that you wanted to exclude certain request types from certain browsers, in this case Firefox; I'll cover that again in case I moved too fast. Again, we're looking for the field that has a name of name and a value of Firefox, so I'll go ahead and exclude that. Now, I've written out the boolean operators longhand, though one of the things that's both very nice about Kibana, but can also occasionally be a bit challenging, is that there's almost always more than one way to accomplish your goal. So I can also use some shorthand in this case, where I'll put an exclamation point before the field name to indicate that it should not be equal to Firefox, and the results will be identical.
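Putting the field-level pieces together, the queries demonstrated above would look roughly like this; the field names and the apache_access value are assumptions based on common Logz.io parsing, so check your own field list:

```text
# Only Apache access logs (exact, case-sensitive match):
type:apache_access

# Narrow to one client IP:
type:apache_access AND client_ip:203.0.113.7

# Exclude Firefox, written longhand...
type:apache_access AND NOT name:Firefox

# ...or with the shorthand negation:
type:apache_access AND !name:Firefox
```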

Now, it might not always be the case that you want to take the time to construct a query; you might want to navigate your data in a more visual way, or maybe you want to set up some simple filters ahead of time before you make a more complex query. The way to accomplish that in Logz.io is via filter-based searching, and you can add filters to your searches in three different ways: you can add them via the available fields listing you see on the left-hand side of the page, you can drill down into the documents themselves and add fields to your search, and you can also add filters manually using the "Add a filter" button you see running across this toolbar. So I'll start by adding a filter at the log, or document, level. Again, I'll expand the event using the arrow next to the timestamp, and then we'll filter by type, which I think is a great use for this filter; it's a really easy way to ensure that you're looking at exactly the subset of data that you want. You'll notice next to each one of these values for the fields we have four different buttons, and each one of these buttons represents a different type of filter that you can create. This asterisk here allows you to filter on whether that field is present in any data that you're searching against. The table button actually gives you the ability to add the value for this field as a column in the document table, and you see when I clicked on that, now we can see at a glance the log type for all the relevant events. And then you also have the ability to create positive or negative filters as well, so we can exclude all of our access logs, or include them, which is what I'll be doing in this case. Once you've added that filter, it's going to appear in this list here in the filter bar, and when we mouse over it you're going to be presented with a few different options. In this case I'm going to choose to edit it, and you can provide a label, which is just a friendly name, some arbitrary text that you can enter to make it easier to figure out what's going on with your search, or to make it easier for your colleagues to figure out what's going on with your search should you choose to share it. So we'll just call this Apache logs.

So that's one step that we're taking to get a more granular subset of our data, but I'd like to introduce some other filters as well, and I'll do that via the available fields in this case. Let's use that browser example again and take a look at the name field. You'll notice when I click on that name field, we get a nice expansion, as well as a grouping of the values for that field for the last 500 records or so, and you can create positive or negative filters here just as we did when we were looking at the documents. So if I wanted to include only our requests from Internet Explorer, I'll click that plus-sign magnifying glass, and again we're going to see that filter appear here in the filter bar. Lastly, we can set up a filter manually via this "Add a filter" button. The first thing that you'll need to do here is specify a field; there's some auto-population here based on the fields that are present in this data, but there's also going to be some auto-completion for you as well, so if you have a vague idea of what you're looking for, you can enter some terms and it's going to return the relevant fields. In this case, I want to filter by country name. Once you choose the field that you want to filter against, the next step is to specify the operator to be used. If we start with "is", all we're looking for here are events with the exact value for this particular field; this works just the way that it does when you're searching directly via the query bar, so you want to ensure that the value you place here is identical to the way that it appears in your data. We can also exclude terms or records using "is not", and you can specify multiple values with "is one of" or "is not one of"; so if we wanted to include only our requests from Ireland or Israel, you can go ahead and type those in, hit enter, and run the filter against multiple terms. The "exists" and "does not exist" filters work pretty much how you would suspect, where we're going to be including events where the field is present, or excluding events where it is not. So in this case, let's say that we're looking for any records that are not from the United States. Here we have our two kind of positive filters and a negative filter, and something that's important to keep in mind is that filters are always additive. What that means is you can think of every filter that you add as being joined by an AND operator; it's not the case that you can use OR operators with filters, that's really what the query bar is for.
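In other words, a filter set like the one above behaves as if you had written a single AND-joined query. If you need OR logic, you have to express it in the query bar instead, along these lines (the field names, including geoip.country_name, are illustrative assumptions):

```text
# Three filters -- type is apache_access, name is IE,
# country name is not United States -- behave like this single query:
type:apache_access AND name:IE AND NOT geoip.country_name:"United States"

# OR logic is only possible in the query bar, not with filters:
geoip.country_name:("Ireland" OR "Israel")
```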

So we've covered three different types of search strategies today: free text searching, field-level searching, and filter-based searching. And as you've probably already realized, it's not the case that you need to use these separately; in fact, as a best practice at Logz.io, we strongly recommend that you combine both filter-based searching and field-based searching for best results. In this case, if I wanted to recreate the search that we just ran using the Lucene query language in the query bar, I could go ahead and accomplish that by saying that the country name should not be equal to the United States, and we'll get the same results. So I hope that that was a helpful introduction to some of the basic search strategies that you can employ in Kibana.

I think for now I've probably done enough talking for today, so I do want to turn things over to our guests, Anton and Alan. But before I do that, I do want to launch another poll, and what we're interested in seeing is: are you parsing your logs today? It could be that you're sending your logs in as JSON, maybe you've created custom grok filters, or you've worked with our support team to accomplish that; however you did it, we're interested to know whether you've already got your data parsed into fields that are easy for you to work with. Closing in on the 70% mark; glad to see I haven't put anybody to sleep yet, a good sign. Okay, I think that's good enough, we'll close that poll out and share the results. So the majority of you are parsing your logs today, which is fantastic; it means you're in a very good position to get the most out of Logz.io, so it's great to see that. For those of you who haven't had the opportunity to get your logs parsed, we would strongly encourage you to reach out to our support team. It's what we're here for, it's one of the things that we do best, and it's included in the service if you are a subscriber, or even if you're not; so please don't hesitate to reach out, we're more than happy to help you get the most out of Logz.io. So I think at this point it makes sense for me to turn it over to our guests, and I think we'll begin with Anton, if that makes sense for you. I'll go ahead and turn over presenter rights and we can get started.

Absolutely, thanks Mike. Okay, very good, so I'm now showing my screen. But before I dive into our use of Logz.io, I'll explain briefly about our environment. We run a Kubernetes cluster with about 30 microservices, and a few hundred different containers running on that cluster, all generating very detailed application logs as single-line JSON, pushed with a basic Fluentd container all the way to Logz.io, to be parsed and await our eyes should anything go wrong, or should we have any question about how our system operates in production. What I want to show you now is a set of our staging environment logs, being collected 24/7 from the Kubernetes staging cluster that we have; all our services are running on it and pushing their logs through. So one interesting thing to note, even before we begin, is that this type of log pushing generates a lot of Kubernetes-related metadata that allows us to identify the service on which each log originated, which is very helpful when everything is flowing into the same place and we then need to break things apart and understand what happened. Having the container name, or the name of the pod, as parsed keys on all of our logs allows us very simple lookups.
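To illustrate, a single application log line enriched with Fluentd's Kubernetes metadata might look something like this; this is a mocked-up record, and the exact key names depend on your Fluentd configuration:

```json
{
  "log": "request completed",
  "status": 200,
  "duration": 42,
  "kubernetes": {
    "container_name": "checkout-api",
    "pod_name": "checkout-api-6d5f9b-x2k4p",
    "namespace_name": "staging"
  }
}
```

Every one of those nested keys becomes a searchable, filterable field once parsed.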

Very similar to what was demonstrated by Mike beforehand, I will just go ahead and use the Kubernetes container name key as a key that is being shown here against each log entry, and I'm seeing all the different types of services running on our cluster, knowing where each log line originated. Using the plus and minus selectors here, I can filter for only logs originating on a specific service, or perhaps exclude one from the search and say: that's the least interesting service for me right now, please don't show me anything from it. This type of bubble here, as Mike said, adds to the filters in the query I'm selecting. The ability to toggle things as they are selected, from just this value to anything but this value, is also very handy and very helpful in our searches. Now that I have this field selected, there is also a quick count feature of Kibana allowing me to see the distribution of values for this specific key, and here I can see, for example, the level of chattiness of our services; some are more chatty than others. That's very helpful when we're experiencing some sort of spike in our traffic: understanding which service is suffering from the spike is often highly correlated with a surge in the amount of logs generated by that specific service. Do note that even though I'm currently looking at over 200,000 search results, the quick count only covers the first 500, so it may not give a very precise representation of the distribution of values, but it gives some indication. The ability to toggle values from here, with the plus and minus in this magnifying glass icon, is just as handy as doing so from the table view.


Now, since the logs are completely within our control, as we develop our code base we decide what to push to the logs, and we find it very handy to have a set of standard keys that we use across our services for different values, which will later allow us to understand what's going on in the system as a whole. The status key we use to indicate the response status that the service is responding with; we're using HTTP codes, so unless it's some internal call, the status will have a numeric value. Now, I'm seeing a lot of logs that do not have any status value, and I don't want those. A very good way of filtering them out is to search for the status field with an asterisk, meaning any value, as long as the value exists; and now I'm narrowing down from the 200,000 results to just a few thousand, 27,000 results, all of which have some status value. As long as everything is going okay, the status should be 200, and if I want to zoom in on an error that is happening in our system, any error, I can say: show me any status that is not 200. In this way, though, empty values are of course included, so I combine it with any value for status that is not 200, and I'm getting the results that I want. Okay, so a 404 is here, a 401 would be here, and other codes as well.
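The status queries just described can be summarized like this, assuming, as in these logs, a field named status:

```text
# Only events that have some status value:
status:*

# Any non-200 status -- but events with no status at all also match:
NOT status:200

# Errors only: a status exists AND it is not 200:
status:* AND NOT status:200
```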

Another very handy key that we use is duration. The important thing here is to have a standard; we use milliseconds, but if two different services reported the duration of their operations in different units, that would be very confusing, so stick to the same key and the same unit if you decide to follow this path. Here I would usually want to see something generic, for example fields that have a value for duration which is larger than 100 milliseconds; just using this greater-than notation as part of the search can go a long way. And suddenly I see that we have requests going through our system that take longer than 4 seconds to complete; depending on the type of request and the type of service, this can be alarming or not.
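The duration examples use Lucene range syntax; roughly, assuming a numeric duration field holding milliseconds:

```text
# Slow requests, longer than 100 ms:
duration:>100

# Fast path, under 100 ms:
duration:<100

# An explicit inclusive range also works, e.g. between 100 ms and 4 s:
duration:[100 TO 4000]
```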

Another feature, which I don't think we'll be covering this time, is Logz.io alerts, which can be triggered by values outside of some norm that you decide on; we use that a lot, specifically with durations. And just in the same manner, I can also look at all the fast queries we process, the ones that complete in under 100 milliseconds, in case I want to see where the hot path of the system is, and again correlate it back to the services that contribute the most fast-processed requests that we're handling. That's it on my side, unless there are any questions. No, that was great, thanks very much Anton, I actually learned a few things myself.

I think it probably makes sense to pass things over to Alan. If you wouldn't mind telling us a little bit about how you guys are using Logz.io and Kibana search? Absolutely, thanks Anton. Cool, so if everyone can see my screen, I'm just doing a quick search here. To back up before we get into search, I'll talk a little bit about our environment the same way. We're a large-volume web proxy; we serve anywhere from 15 to 25,000 requests per second, and we generate an immense amount of logs based on that. We actually ship a sampled volume of logs to Logz.io, and I think we still end up sending you guys somewhere between 55 and 75 gigs a day. Unlike Anton, we use Filebeat to ship our logs, and while our logs are in JSON format, they are fairly complex, and we actually do leverage Logz.io's log parsing service, and they do field mapping for us. So with field mapping, what you can see here is I've got a field search going, and I'm looking at my company's website hits for the last 24 hours. Quick filters, sorry, quick time ranges are really helpful, I'd recommend them highly; it's really easy to toggle between different time ranges, which can help you start to see trends and patterns emerging. There's also an auto-refresh function on this that is useful when you're trying to do something that approximates live tail, which I know you guys support as well. So SiteSpect has become fairly familiar with the Lucene query language, and we use it a lot in our daily lives, doing troubleshooting and just analysis of what's going on in our platform. So here I'm looking at only successful status codes for our website, filtering now on multiple conditions. One thing I'll call out as extraordinarily important is that your Lucene operators are always capitalized. We cannot stress this enough; the number of times that we've looked at bad data simply because of a lowercase operator, and been thrown for a huge loop, is huge, so make sure you capitalize everything. You might have seen Anton using exclamation points, and I think Mike was doing this as well: you can do AND-bang, as in AND !status_code, you can also use AND NOT, which seems to generate the same results, or you can use the minus sign to exclude a particular condition from your search. We'll stick with NOT status_code:200.
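The capitalization warning is worth pinning down with an example. In Lucene syntax, lowercase and/or/not are treated as ordinary search terms rather than operators, so these hypothetical queries behave very differently:

```text
# Correct: two conditions, both required:
status_code:200 AND host:www.example.com

# Wrong: lowercase "and" is just another free-text term, so this
# effectively becomes a looser OR-style query and returns bad data:
status_code:200 and host:www.example.com

# Equivalent ways to negate, per the discussion above:
NOT status_code:200
!status_code:200
-status_code:200
```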

I'll let this load. Then, Anton was talking about how, in the available fields, when you click into one of these, you can see a quick count. It's really important to note that this is only the top 500 records from the search, and while it does give you a fairly good quick glance at what the distribution looks like, because it is only those first 500, the true distribution may actually be somewhat different. Logz.io has this really handy Visualize button here, right from search; you can click this and it will show you all of the records that appear in the search, and give you the full histogram distribution of what the error conditions are in this case, which I think is really helpful. And it's really easy to go back to just the plain search.

also mentioned alerting any of you who

are on the alerting webinar know that

I'm big into alerting Logz.io has this

great button right here that allows you

to create an alert based off of

your search very quickly so as you hone

your search as soon as you come to a

query that you want to have actionable

alerting based on Logz.io has made it

really easy to create that alert and

drive value and actionable information

off of that query the other thing is so

Mike talked a little bit about the open

here we go so the fields here we use

this a lot just to toggle the columns

and you saw Anton building filters

off of this which show up at the top

this is also really helpful and as you

start building filters and creating more

granular searches you can also save your

searches and saved searches are really

helpful SiteSpect uses a ton of these

as well it just - you know repeatable

searches that you may want to add

conditions to so we have saved searches for

different types of logs with different

types of conditions that are saved and

then you can add a host name condition

to filter based on a particular client

the search interface is just super

flexible and has really helped SiteSpect

in managing our enormous volume of

logs and to understand exactly what's

going on

at any given time either globally or for a

particular customer great thanks a lot

Alan appreciate the thorough overview

of how you guys are using Logz.io today

covering some of the ways that folks

can save time and work with their data

directly and just a heads up you're

still sharing your screen oh my

apologies no worries so that was you know

the peril of the live presentation so

we've covered a lot of ground today and

I think it makes sense to move into Q&A

but before that I just want to plug if

you don't mind

the Logz.io Academy once more for those

of you who may have shown up a bit late

and let's see here we did recently

launch the Logz.io Academy which is a

series of self-paced learning

materials for example tutorials

recordings of webinars like this one

ebooks how-to guides and more and this is

already included with your Logz.io

subscription if you are a subscriber so

there's a lot of great topics that are

covered log shipping parsing searching

visualizations alerts among others so we

definitely recommend you check that out

academy.logz.io next let's see I know

we've gotten some questions coming in

Eric so maybe it makes sense to start

fielding those now so one of the

questions we have here is can we search

across many sets of accounts with one

query the answer is absolutely so if you

give me just a moment I can jump into

our instance and actually show you what

that looks like where you can see here

we have this selected accounts feature

so I can uncheck this so that you can

get a little more granular but

these are some accounts that are

associated with the main demo account

that I'm currently working with so we

can include or exclude sub accounts

based on the selections that I make

here so that's a definite yes the next

question is whether we can use wildcards

and if so how

yes you definitely can use wildcards


Anton actually showed you one way that

you could do that where if you were

specifying a field you can actually use

the star just to verify that there's a

value for it so for instance if we

wanted to go with let's say response so

we're searching on that field and then

verifying that there's something that

exists for it it's also the case that

you can use asterisks within fields

where it's the case that maybe we wanted

to find everything starting with 200 you

have the ability to do that as well and

you can use wildcards in other ways
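A minimal sketch of the wildcard forms just described, with hypothetical field names:

```
response:*
status_code:2*
message:time*out
```

response:* only verifies that the field exists with some value; status_code:2* matches values beginning with 2; wildcards can also sit mid-term. Leading wildcards tend to be expensive and are best avoided.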

we're actually going to be covering wildcards in

more depth as part of the advanced

webinar that we'll be running next month

so thank you for that for that question
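Related to the next question: Lucene also accepts regular expressions delimited by forward slashes, in free text as well as against a field; a hedged sketch with a hypothetical field name:

```
status_code:/5[0-9]{2}/
/fail(ed|ure)/
```

The first matches any 5xx status code; the second is a free-text regex match.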

and does Logz.io support regular

expressions yes we do it is the case

that you can leverage regular

expressions both in free text searches

as well as field based queries we're

going to be going into quite a bit of

depth on regular expressions and which

libraries are being relied on in the

advanced webinar as well the next

question that we've got here is I have a

field that contains numbers

I want a filter that shows all logs that

are less than a constant value when I

try to add a new query filter all I can

see is a grayed-out string option what am I

doing wrong

let's see without seeing the logs it would be hard to

say but I would wonder whether it

was a parsing or field mapping issue or

maybe we're mapping the field that

you're trying to search against as a

string rather than an integer
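For context on why the mapping matters here: against a numerically mapped field, Lucene range syntax works directly; a minimal sketch with a hypothetical field name:

```
response_time:<500
response_time:[100 TO 500]
response_time:{100 TO 500}
```

Square brackets are inclusive, curly braces exclusive, and TO must be upper-case; against a string-mapped field these comparisons won't behave numerically, which is consistent with the symptom described.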

that's something that I would probably

want to look into in the field mapping and

I'd recommend you check with support I

think we'll do just one more in the

interests of time which is how can you

make the status code 200 green and

otherwise red which I suspect is

something that's being shown

automatically within Kibana based on

some presets that we've created but

that's something that we're actually

going to be covering in detail in the

Logz.io Academy thanks to the assist

from my fellow customer success engineer

Diago but you know so please do check

out the Academy where we do get into

that in detail how you can customize

the appearance of those events

so I think that's probably going to do

it on the Q & A side one last poll that

I'd like to ask before we wrap up which

is going to be a very direct poll but

whether you learned anything new about

Kibana search from this webinar and

answer honestly my feelings won't be

hurt but this will be helpful for us in

constructing some of the material for

the advanced webinar and we'll wait till

about 70% come in and you know if the

results are favorable I'll share them if

not I'll hide them so don't worry

okay we'll go ahead and close that

out and we got a pretty good response rate

so it looks like most of you got

something out of it

which is great for those of you who

maybe didn't learn anything new this is

a great opportunity for you to tell us

what you want to see in the next webinar

so in the next webinar we're going to be

covering regular expressions range

searches wildcards parentheses and how

you can really dive deep into your

data we do need your help to determine

what's going to be most valuable for you to

see so we invite you to send us your

suggestions to help@logz.io

by the end of the week or

maybe even a little later to get

yourself a stylish and free Logz.io

t-shirt so again we invite you to go

ahead and send that out to us so that we

can build that into the next webinar I

wanna thank everyone for attending today

I look forward to seeing you in next

month's continuation of this series and

I also want to extend a special thanks

to you our special guests

Anton and Alan for taking time out of

their busy days to talk to you all about

how they're leveraging Kibana search in

their organizations so thanks again to

everyone and I look forward to seeing

you next month