This is the first in a series of three articles on penetration testing for Web applications. The first installment provides the penetration tester with an overview of Web applications - how they work, how they interact with users, and most importantly how developers can expose data and systems with poorly written and secured Web application front-ends.

Note: It is assumed that the reader of this article has some knowledge of the HTTP protocol - specifically, the format of HTTP GET and POST requests, and the purpose of various header fields. This information is available in RFC2616.

Web applications are becoming more prevalent and increasingly more sophisticated, and as such they are critical to almost all major online businesses. As with most security issues involving client/server communications, Web application vulnerabilities generally stem from improper handling of client requests and/or a lack of input validation checking on the part of the developer.

The very nature of Web applications - their ability to collate, process and disseminate information over the Internet - exposes them in two ways. First and most obviously, they have total exposure by nature of being publicly accessible. This makes security through obscurity impossible and heightens the requirement for hardened code. Second and most critically from a penetration testing perspective, they process data elements from within HTTP requests - a protocol that can employ a myriad of encoding and encapsulation techniques.

Most Web application environments (including ASP and PHP, which will both be used for examples throughout the series) expose these data elements to the developer in a manner that fails to identify how they were captured, and hence what kind of validation and sanity checking should apply to them. Because the Web "environment" is so diverse and contains so many forms of programmatic content, input validation and sanity checking are the key to Web application security. This involves both identifying and enforcing the valid domain of every user-definable data element, and understanding the source of all data elements well enough to determine what is potentially user definable.

The Root of the Issue: Input Validation

Input validation issues can be difficult to locate in a large codebase with lots of user interactions, which is the main reason that developers employ penetration testing methodologies to expose these problems. Web applications are, however, not immune to the more traditional forms of attack. Poor authentication mechanisms, logic flaws, unintentional disclosure of content and environment information, and traditional binary application flaws (such as buffer overflows) are rife. When approaching a Web application as a penetration tester, all this must be taken into account, and a methodical process of input/output or "blackbox" testing, in addition to (if possible) code auditing or "whitebox" testing, must be applied.

What exactly is a Web application?

A Web application is an application, generally composed of a collection of scripts, that resides on a Web server and interacts with databases or other sources of dynamic content. Web applications are fast becoming ubiquitous, as they allow service providers and their clients to share and manipulate information in an (often) platform-independent manner via the infrastructure of the Internet. Some examples of Web applications include search engines, Webmail, shopping carts and portal systems.

How does it look from the user's perspective?

Web applications typically interact with the user via FORM elements and GET or POST variables (even a 'Click Here' button is usually a FORM submission). With GET variables, the inputs to the application can be seen within the URL itself; with POST requests, however, it is often necessary to study the source of form-input pages (or capture and decode valid requests) in order to determine the user's inputs.

An example HTTP request that might be provided to a typical Web application is as follows:

GET /sample.php?var=value&var2=value2 HTTP/1.1 | HTTP-METHOD REQUEST-URI PROTOCOL/VERSION
Session-ID: 361873127da673c | Session-ID Header
Host: www.webserver.com | Host Header
<CR><LF><CR><LF> | Two carriage return line feeds

Every element of this request can potentially be used by the Web application processing the request. The REQUEST-URI identifies the unit of code that will be invoked, along with the query string: an ampersand-separated list of variable=value pairs defining input parameters. This is the main form of Web application input. The Session-ID header provides a token identifying the client's established session as a primitive form of authentication. The Host header is used to distinguish between virtual hosts sharing the same IP address; it is typically parsed by the Web server but is, in theory, within the domain of the Web application.

As a penetration tester you must use all input methods available to you in order to elicit exception conditions from the application; you cannot limit yourself to what a browser or automated tools provide. It is quite simple to script HTTP requests with utilities like curl, or with shell scripts using netcat. Exhaustive blackbox testing of a Web application involves exploring each data element, determining the expected input, manipulating or otherwise corrupting this input, and analysing the output of the application for any unexpected behaviour.
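As an illustration of scripting requests outside the browser, the following Python sketch assembles a raw HTTP GET request with one parameter value deliberately manipulated. The host name and parameter names are placeholders taken from the sample request above, not a real target:

```python
def build_get(host, path, params, headers=None):
    """Assemble a raw HTTP/1.1 GET request string with arbitrary parameter values."""
    query = "&".join(f"{k}={v}" for k, v in params.items())
    lines = [f"GET {path}?{query} HTTP/1.1", f"Host: {host}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    lines.append("Connection: close")
    return "\r\n".join(lines) + "\r\n\r\n"

# Substitute an overlong value for var2 to probe the application's input handling.
request = build_get("www.webserver.com", "/sample.php",
                    {"var": "value", "var2": "A" * 4096})
```

The resulting string can be fed to the server over a plain TCP connection (with netcat, or Python's socket module), which makes it easy to vary one data element at a time and observe the response.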

The Information Gathering Phase

Fingerprinting the Web Application Environment

One of the first steps of the penetration test should be to identify the Web application environment, including the scripting language and Web server software in use, and the operating system of the target server. All of these crucial details are simple to obtain from a typical Web application server through the following steps:

  1. Investigate the output from HEAD and OPTIONS HTTP requests

    The headers and any page returned from a HEAD or OPTIONS request will usually contain a Server: string or similar detailing the Web server software version, and possibly the scripting environment or operating system in use.

    OPTIONS / HTTP/1.0

    HTTP/1.1 200 OK
    Server: Microsoft-IIS/5.0
    Date: Wed, 04 Jun 2003 11:02:45 GMT
    MS-Author-Via: DAV
    Content-Length: 0
    Accept-Ranges: none
    DASL: <DAV:sql>
    DAV: 1, 2
    Public: OPTIONS, TRACE, GET, HEAD, DELETE, PUT, POST, COPY, MOVE, MKCOL, PROPFIND, PROPPATCH, LOCK, UNLOCK, SEARCH
    Allow: OPTIONS, TRACE, GET, HEAD, COPY, PROPFIND, SEARCH, LOCK, UNLOCK
    Cache-Control: private

  2. Investigate the format and wording of 404/other error pages

    Some application environments (such as ColdFusion) have customized and therefore easily recognizable error pages, which will often give away the version of the scripting language in use. The tester should deliberately request invalid pages and utilize alternate request methods (POST/PUT/other) in order to glean this information from the server.

    Below is an example of a ColdFusion 404 error page:

    [Image: ColdFusion 404 error page]

  3. Test for recognised file types/extensions/directories

    Many Web servers (such as Microsoft IIS) will react differently to a request for a known and supported file extension than to one for an unknown extension. The tester should request files with common extensions such as .ASP, .HTM, .PHP and .EXE, and watch for any unusual output or error codes.

    GET /blah.idq HTTP/1.0

    HTTP/1.1 200 OK
    Server: Microsoft-IIS/5.0
    Date: Wed, 04 Jun 2003 11:12:24 GMT
    Content-Type: text/html

    <HTML>The IDQ file blah.idq could not be found.

  4. Examine source of available pages

    The source code from the immediately accessible pages of the application front-end may give clues as to the underlying application environment.

    <title>Home Page</title>
    <meta content="Microsoft Visual Studio 7.0" name="GENERATOR">
    <meta content="C#" name="CODE_LANGUAGE">
    <meta content="JavaScript" name="vs_defaultClientScript">

    In this situation, the developer appears to be using MS Visual Studio 7. The underlying environment is likely to be Microsoft IIS 5.0 with .NET framework.

  5. Manipulate inputs in order to elicit a scripting error

    In the example below the most obvious variable (ItemID) has been manipulated to fingerprint the Web application environment:

    [Image: ItemID manipulation in a URL]

  6. TCP/ICMP and service fingerprinting

    Using traditional fingerprinting tools such as Nmap and Queso, or the more recent application fingerprinting tools Amap and WebServerFP, the penetration tester can gain a more accurate idea of the underlying operating system and Web application environment than through many other methods. Nmap and Queso examine the nature of the host's TCP/IP implementation to determine the operating system and, in some cases, the kernel version and patch level. Application fingerprinting tools rely on data such as the Server HTTP header to identify the host's application software.
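The banner-grabbing in step 1 above can be sketched programmatically. The helper below extracts the Server banner from a raw HTTP response; the sample response is canned for illustration, whereas in practice the raw text would come from a HEAD or OPTIONS request sent over a socket or with curl:

```python
def server_banner(raw_response):
    """Return the value of the Server header from a raw HTTP response, if present."""
    for line in raw_response.split("\r\n"):
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return None

sample = ("HTTP/1.1 200 OK\r\n"
          "Server: Microsoft-IIS/5.0\r\n"
          "MS-Author-Via: DAV\r\n\r\n")
print(server_banner(sample))  # Microsoft-IIS/5.0
```

Headers such as MS-Author-Via in the sample hint at the platform (here, IIS with WebDAV) even when the Server string is absent or falsified.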

Hidden form elements and source disclosure

In many cases developers require inputs from the client that should be protected from manipulation, such as a variable that is dynamically generated, served to the client, and required in subsequent requests. In order to prevent users from seeing and possibly manipulating these inputs, developers use form elements with the HIDDEN attribute. Unfortunately, this data is in fact only hidden from view on the rendered version of the page - not within the source.

There have been numerous examples of poorly written ordering systems that would allow users to save a local copy of order confirmation pages, edit HIDDEN variables such as price and delivery costs, and resubmit their request. The Web application would perform no further authentication or cross-checking of form submissions, and the order would be dispatched at a discounted price!

<FORM METHOD="POST" ACTION="/shop/checkout.htm">
<INPUT TYPE="HIDDEN" NAME="quoteprice" VALUE="4.25">Quantity:
<INPUT TYPE="TEXT" NAME="totalnum">
<INPUT TYPE="SUBMIT" VALUE="Checkout">
</FORM>

This practice is still common on many sites, though to a lesser degree. Typically only non-sensitive information is contained in HIDDEN fields, or the data in these fields is encrypted. Regardless of the sensitivity of these fields, they are still another input to be manipulated by the blackbox penetration tester.
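To illustrate the attack on the order form above, the sketch below rebuilds the POST body with the HIDDEN quoteprice field altered. The field names come from the example form; the price shown is the attacker's substitution:

```python
from urllib.parse import urlencode

# Fields copied from the form's source; the HIDDEN price is under the
# client's control just like any visible input.
fields = {"quoteprice": "0.01", "totalnum": "1"}
body = urlencode(fields)
headers = {"Content-Type": "application/x-www-form-urlencoded",
           "Content-Length": str(len(body))}
print(body)  # quoteprice=0.01&totalnum=1
```

If the application accepts this resubmission without cross-checking the price server-side, the flaw described above is confirmed.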

All source pages should be examined (where feasible) to determine if any sensitive or useful information has been inadvertently disclosed by the developer - this may take the form of active content source within HTML, pointers to included or linked scripts and content, or poor file/directory permissions on critical source files. Any referenced executables and scripts should be probed, and if accessible, examined.

JavaScript and other client-side code can also provide many clues as to the inner workings of a Web application. This is critical information when blackbox testing: although the whitebox (or 'code-auditing') tester has access to the application's logic, to the blackbox tester this information is a luxury which can provide further avenues of attack. For example, take the following chunk of code:

<INPUT TYPE="SUBMIT" onClick="
if (document.forms['product'].elements['quantity'].value >= 255) {
document.forms['product'].elements['quantity'].value='';
alert('Invalid quantity');
return false;
} else {
return true;
}
">

This suggests that the application is trying to protect the form handler from quantity values of 255 or more - the maximum value of a tinyint field in most database systems. It would be trivial to bypass this piece of client-side validation, insert a larger integer value into the 'quantity' GET/POST variable, and see whether this elicits an exception condition from the application.
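Since the JavaScript check runs only in the browser, the server must repeat the validation itself. A minimal server-side re-check might look like the following sketch (the bounds mirror the tinyint limit from the example; the function name is illustrative):

```python
def valid_quantity(raw):
    """Server-side re-validation - the one check a crafted request cannot skip."""
    try:
        quantity = int(raw)
    except (TypeError, ValueError):
        return False
    return 1 <= quantity <= 254

print(valid_quantity("10"))        # True
print(valid_quantity("255"))       # False
print(valid_quantity(str(2**31)))  # False - the value a tester might inject
```

The tester's job is precisely to find inputs, like the oversized integer above, that reach the back-end without passing through any such server-side check.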

Determining Authentication Mechanisms

One of the biggest shortcomings of the Web application environment is its failure to provide a strong authentication mechanism. Of even more concern is the frequent failure of developers to apply effectively what mechanisms are available. It should be explained at this point that the term Web application environment refers to the set of protocols, languages and formats - HTTP, HTTPS, HTML, CSS, JavaScript, and so on - that are used as a platform for the construction of Web applications. HTTP provides two forms of authentication: Basic and Digest. Both are implemented as a series of HTTP requests and responses, in which the client requests a resource, the server demands authentication, and the client repeats the request with authentication credentials. The difference is that Basic authentication sends the credentials in clear text, while Digest authentication hashes the credentials together with a nonce (a time-sensitive value) provided by the server, so the password itself is never transmitted.

Besides the obvious problem of clear-text credentials when using Basic, there is nothing inherently wrong with HTTP authentication, and the clear-text problem can be mitigated by using HTTPS. The real problem is twofold. First, since this authentication is applied by the Web server, it is not easily within the control of the Web application without interfacing with the Web server's authentication database. Custom authentication mechanisms are therefore frequently used, and these open a veritable Pandora's box of issues in their own right. Second, developers often fail to correctly assess every avenue for accessing a resource and to apply authentication mechanisms accordingly.

Given this, penetration testers should attempt to ascertain both the authentication mechanism that is being used and how this mechanism is applied to every resource within the Web application. Many Web programming environments offer session capabilities, whereby the user provides a cookie or a Session-ID HTTP header containing a pseudo-unique string identifying their authentication status. Such tokens can be vulnerable to attacks such as brute forcing, replay, or reconstruction if the string is simply a hash or concatenation of known elements.
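As an illustration of why derived tokens are dangerous, the sketch below shows a hypothetical flawed session-ID scheme built only from guessable elements. Any real scheme of this shape can be regenerated offline by an attacker who knows, or can enumerate, the inputs:

```python
import hashlib

def weak_session_id(username, login_time):
    """Hypothetical flawed scheme: a hash of values an attacker can guess."""
    return hashlib.md5(f"{username}:{login_time}".encode()).hexdigest()

# Knowing the username and the approximate login time, an attacker can
# enumerate candidate timestamps and test each resulting token against
# the application.
candidates = [weak_session_id("admin", t) for t in range(1000, 1005)]
```

A sound scheme instead draws the token from a cryptographically secure random source, so there is nothing for the attacker to reconstruct.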

Every attempt should be made to access every resource via every entry point. This will expose problems where a root-level resource such as a main menu or portal page requires authentication but the resources it in turn provides access to do not. An example is a Web application that requires authentication and then presents a menu of documents the user is authorised to access, each document presented as a link to a resource such as:

http://www.server.com/showdoc.asp?docid=10

Although reaching the menu requires authentication, the showdoc.asp script requires no authentication itself and blindly serves the requested document, allowing an attacker simply to supply the docid GET variable of his choice and retrieve the document. As elementary as it sounds, this is a common flaw in the wild.
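Exploiting such a flaw is a matter of enumerating the unprotected parameter. A minimal sketch, using the example URL above (no session cookie or credentials are attached, which is precisely the point of the test):

```python
# Candidate requests for the unauthenticated showdoc.asp - if any of these
# return a document without a valid session, access control has failed.
base = "http://www.server.com/showdoc.asp"
candidates = [f"{base}?docid={docid}" for docid in range(1, 21)]
print(candidates[0])  # http://www.server.com/showdoc.asp?docid=1
```

Each URL would then be requested in turn and the responses compared against an authenticated baseline.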

Conclusions

In this article we have presented the penetration tester with an overview of Web applications and how Web developers obtain and handle user inputs. We have also shown the importance of fingerprinting the target environment and developing an understanding of the back-end of an application. Equipped with this information, the penetration tester can proceed to targeted vulnerability tests and exploits. The next installment in this series will introduce code and content-manipulation attacks, such as PHP/ASP code injection, SQL injection, Server-Side Includes and cross-site scripting.

Posted by CEOinIRVINE

tcpdump

Hacking 2011. 8. 31. 00:44

Introduction

In this article I would like to talk about one of the most useful tools in my networking toolbox: tcpdump. Unfortunately, mastering this tool completely is not an easy task. Yet the things you do most often are relatively simple, and they can become a good springboard when diving into more complex topics.

tcpdump uses

tcpdump is a packet sniffer. It is able to capture traffic that passes through a machine. It operates on the packet level, meaning that it captures the actual packets that fly in and out of your computer. It can save the packets into a file - whole packets or only their headers. Later you can "play back" the recorded file and apply different filters on the packets, telling tcpdump to ignore packets that you are not interested in seeing.

Under the hood, tcpdump understands protocols and host names. It will do all in its power to determine which host sent each packet, and will tell you its name instead of the IP address.

It is an exceptionally useful tool for debugging whatever might have caused a certain networking-related problem, and an excellent tool for learning new things.

Invocation

Invoking tcpdump is easy. The first thing to remember is that you should either be logged in as root or be a sudoer on the computer - a sudoer being someone who is entitled to gain administrator rights for a short period of time using the sudo command.

Running tcpdump without any arguments makes it capture packets on the first network interface (excluding lo) and print a short description of each packet to the output. This may cause a bit of a headache if you are using the network to connect to the machine. If you are connected via SSH or telnet, running tcpdump will produce a line of text for each incoming or outgoing packet; that line of text will cause the SSH daemon to send a packet of its own, thus causing tcpdump to produce another line of text, and so on. This will not stop until you do something about it.

Simple filtering

So the first thing that we will learn about tcpdump is how to filter out SSH and telnet packets. We will study the basics of tcpdump filtering later in this guide, but for now just remember this syntax.

# tcpdump not port 22

"not port 22" is a filter specification that tells tcpdump to filter out packets whose source or destination port is 22. As you know, port 22 is the SSH port. Basically, when you tell tcpdump something like this, it will ignore all SSH packets - exactly what we needed.
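The matching rule itself is simple enough to model. Here is a small Python sketch of what "not port 22" evaluates to for a given packet (the port numbers are illustrative):

```python
def matches_not_port_22(src_port, dst_port):
    """Model of the pcap filter 'not port 22': True means the packet is kept."""
    return not (src_port == 22 or dst_port == 22)

print(matches_not_port_22(22, 48513))  # False - an SSH packet is filtered out
print(matches_not_port_22(48513, 80))  # True  - an HTTP packet is kept
```

tcpdump compiles the filter expression into exactly this kind of per-packet boolean test.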

Telnet on the other hand, uses port 23. So if you are connecting via telnet, you can filter that out with:

# tcpdump not port 23

Clear and simple!

Reading tcpdump's output

By default tcpdump produces one line of text for every packet it intercepts. Each line starts with a time stamp, which tells you the precise time the packet arrived.

Next comes the protocol name. Unfortunately, tcpdump understands only a limited number of protocols; it won't tell you the difference between, for instance, packets belonging to an HTTP stream and packets belonging to an FTP stream. Instead, it will mark such packets as IP packets. It does have some limited understanding of TCP: for instance, it identifies TCP control flags such as SYN, ACK and FIN. This information is printed after the source and destination IP addresses (if it is an IP packet).

The source and destination addresses follow the protocol name. For IP packets, these are IP addresses. For other protocols, tcpdump does not print any identifiers unless explicitly asked to do so (see the -e command line switch below).

Finally, tcpdump prints some information about the packet. For instance, it prints TCP sequence numbers, flags, ARP/ICMP commands, etc.

Here’s an example of typical tcpdump output.

17:50:03.089893 IP 69.61.72.101.www > 212.150.66.73.48777: P 1366488174:1366488582
(408) ack 2337505545 win 7240 <nop,nop,timestamp 1491222906 477679143>

This packet is part of an HTTP data stream. You can find the meaning of each and every field of the packet description in tcpdump's manual page.

Here’s another example

17:50:00.718266 arp who-has 69.61.72.185 tell 69.61.72.1

This is an ARP packet. It's slightly more self-explanatory than the TCP packet. Again, to see the exact meaning of each field in the packet description, see tcpdump's manual page.

Invocation continued

Now that we know how to invoke tcpdump even when connecting to the computer over the network, let's see what command line switches are available to us.

Choosing an interface

We'll start with a simple one: how to dump packets that arrive at and are sent through a certain network interface. The -i command line argument does exactly this.

# tcpdump -i eth1

Will cause tcpdump to capture packets from network interface eth1. Or, considering our SSH/telnet experience:

# tcpdump -i eth1 not port 22

Finally, you can specify any as the interface name, to tell tcpdump to listen on all interfaces.

# tcpdump -i any not port 22

Turning off name resolution

As we debug networking issues, we may encounter a problem with how tcpdump works out of the box: it tries to resolve every single IP address that it meets. That is, when it sees an IP packet it asks the DNS server for the names of the computers behind the IP addresses. This works flawlessly most of the time. However, there are two problems.

First, it slows down packet interception. This is not a big deal when there are only a few packets, but when there are thousands and tens of thousands it introduces a delay into the process. The amount of delay varies, depending on the traffic.

Another, much more serious problem occurs when there is no DNS server around, or when the DNS server is not working properly. In that case, tcpdump spends a few seconds trying to figure out the two hostnames for each IP packet - which means virtually stopping interception of the traffic.

Luckily, there is a way around this. The -n option causes tcpdump to stop resolving hostnames.

# tcpdump -n

And here are a few variations of how you can use this option in conjunction with the options we have already learned.

# tcpdump -n -i eth1
# tcpdump -ni eth1 not port 22

Limiting the number of packets to intercept

Here are a few more useful options. Sometimes the amount of traffic that goes in and out of your computer is very high, while all you want to see is just a few packets. Often you want to see who is sending you the traffic, but when you try to capture anything, tcpdump dumps so many packets that you cannot understand anything. This is when the -c command line switch becomes handy.

It tells tcpdump to limit the number of packets it intercepts: you specify the number of packets you want to see, and tcpdump captures that many packets and exits. This is how you use it.

# tcpdump -c 10

Or with options that we’ve learned before.

# tcpdump -ni eth1 -c 10 not port 22

This will limit the number of packets tcpdump receives to 10. Once it has received 10 packets, tcpdump will exit.

Saving captured data

One of the most useful tcpdump features allows capturing incoming and outgoing packets into a file and then playing this file back. By the way, you can play this file back not only with tcpdump, but also with Wireshark (formerly Ethereal), the graphical packet analyzer.

You can do this with the -w command line switch, followed by the name of the file that will contain the packets. Like this:

# tcpdump -w file.cap

Or adding options that we’ve already seen

# tcpdump -ni eth1 -w file.cap not port 22

Changing packet size in the capture file

By default, when capturing packets into a file, tcpdump will save only 68 bytes of data from each packet. The rest of the information is thrown away.

One of the things I do often when capturing traffic into a file is to change the saved packet size. The disk space required to save those extra bytes is cheap and usually available; spending a few spare megabytes on a capture isn't too painful, while losing a valuable portion of the packets might be critical.

So, what I usually do when capturing into a file is run tcpdump with the -s command line switch, which tells tcpdump how many bytes of each packet to save. Specifying 0 as the packet snapshot length tells tcpdump to save the whole packet. Here is how it works:

# tcpdump -w file.cap -s 0

And in conjunction with the options we have already seen:

# tcpdump -ni eth1 -w file.cap -s 0 -c 1000 not port 22

Obviously you can save as much data as you want; specifying 1000 bytes will usually do the job. Just keep in mind that there are so-called jumbo frames, whose size can exceed 8 KB.

Reading from a capture file

Now that we have captured some traffic into a file, we would like to play it back. The -r command line switch tells tcpdump to read the data from a file instead of capturing packets from interfaces. This is how it works.

# tcpdump -r file.cap

With a capture file, we can easily analyze the packets and understand what's inside. tcpdump offers several options that help with this task. Let's see a few of them.

Looking into packets

There are several options that allow one to see more information about each packet. There is a problem though: in general, tcpdump doesn't give you too much information about packets, because it doesn't understand the different protocols.

If you want to see a packet's content, it is better to use a tool like Wireshark. It understands protocols, analyzes them, and lets you see the different fields, not only in the TCP header but also in layer 7 protocol headers.

tcpdump is a command line tool, and like most command line tools, its ability to present information is quite limited. Yet it still has a few options that control the way packets are presented.

Seeing the Ethernet header for each packet

The -e command line switch causes tcpdump to print the Ethernet (link level protocol) header for each packet. Let's see an example.

# tcpdump -e -n not port 22

Controlling time stamps

There are four command line switches that control the way tcpdump prints time stamps. First, there is the -t option, which makes tcpdump omit time stamps. Next comes -tt, which causes tcpdump to print the time stamp as the number of seconds since January 1st, 1970, plus a fraction of a second. -ttt prints the delta between each line and the previous one. Finally, -tttt causes tcpdump to print the time stamp in its regular format, preceded by the date.

Controlling verbosity

-v causes tcpdump to print more information about each packet. With -vv, tcpdump prints even more. As you might guess, -vvv produces even more information. Finally, -vvvv will produce an error message telling you there is no such option :D

Printing the content of the packet

The -x command line switch makes tcpdump print each packet in hexadecimal format. The number of bytes printed remains somewhat a mystery: as is, it will print the first 82 bytes of the packet, excluding the Ethernet header. You can control the number of bytes printed using the -s command line switch.

In case you want to see the Ethernet header as well, use -xx. It will cause tcpdump to print an extra 14 bytes for the Ethernet header.

Similarly, -X and -XX print the contents of the packet in both hexadecimal and ASCII formats; the latter causes tcpdump to include the Ethernet header in the printout.

Packet filtering

We already saw a simple filter that causes tcpdump to ignore SSH packets, allowing us to run tcpdump remotely. Now let's try to understand the language that tcpdump uses to evaluate filter expressions.

Packet matching

We should understand that tcpdump applies our filter to every single incoming and outgoing packet. If a packet matches the filter, tcpdump accepts the packet and, depending on the command line switches, either saves it to a file or dumps it to the screen. Otherwise, tcpdump ignores the packet, counting it only in the totals of packets received, dropped and filtered out that it reports when it exits.

To demonstrate this, let's go back to the not port 22 expression. tcpdump ignores packets that are either sourced from or destined to port 22: when such a packet arrives, tcpdump applies the filter to it, and since the result is false, it drops the packet.

More qualifiers

So, from what we've seen so far, we can conclude that tcpdump understands the word port and understands expression negation with not. Actually, negating an expression is part of the complex expression syntax, and we will talk about complex expressions a little later. In the meantime, let's see a few more packet qualifiers that we can use in tcpdump expressions.

We've seen that the port qualifier matches either the source or the destination port number. To match only the source port or only the destination port, we can use src port or dst port. For instance, using the following expression we can see all outgoing HTTP packets.

# tcpdump -n dst port 80

We can also specify ranges of ports. The portrange, src portrange and dst portrange qualifiers do exactly this. For instance, here is a command that captures all telnet and SSH packets.

# tcpdump -n portrange 22-23

Specifying addresses

Using the dst host, src host and host qualifiers, you can match the destination IP address, the source IP address, or either of them. For example:

# tcpdump src host alexandersandler.net

will print all packets originating from the alexandersandler.net computer.

You can also specify Ethernet addresses, with the ether src, ether dst and ether host qualifiers. Each should be followed by a MAC address, matching the source machine, the destination machine, or either, respectively.

You can specify networks as well. The net, src net and dst net qualifiers do exactly this. Their syntax, however, is slightly more complex than that for a single host, because a netmask has to be specified.

You can use two basic forms of network specification: one using a netmask, and the other using so-called CIDR notation. Here are a few examples.

# tcpdump src net 67.207.148.0 mask 255.255.255.0

Or same command using CIDR notation.

# tcpdump src net 67.207.148.0/24

Note the word mask that does the job of specifying the network in the first example; the second example is much shorter.
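The two notations describe exactly the same network, which can be checked with Python's ipaddress module:

```python
import ipaddress

# The netmask form and the CIDR form from the tcpdump examples above.
net_mask = ipaddress.ip_network("67.207.148.0/255.255.255.0")
net_cidr = ipaddress.ip_network("67.207.148.0/24")

print(net_mask == net_cidr)                               # True
print(ipaddress.ip_address("67.207.148.17") in net_cidr)  # True
```

The /24 prefix length simply counts the leading one-bits of 255.255.255.0.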

Other qualifiers

There are several useful qualifiers that don’t fall under any of the categories I already covered.

For instance, you can match packets by length: the less and greater qualifiers tell tcpdump that you are interested in packets whose length is less than or greater than the value you specify.

Here’s an example that demonstrates these qualifiers in use.

# tcpdump -ni eth1 greater 1000

Will capture only packets whose size is greater than 1000 bytes.

Complex filter expressions

As we have already seen, we can build more complex filter expressions using tcpdump's filter language. In fact, tcpdump allows exceptionally complex filtering expressions.

We've seen the not port 22 expression. Applied to a given packet, this expression produces logical true for packets that are neither sourced from nor destined to port 22. In two words, not negates the expression.

In addition to negation, we can build more complex expressions by combining two smaller expressions into one larger one using the and and or keywords. You can also use brackets to group several expressions together.

For example, here is a tcpdump filter that captures packets larger than 100 bytes originating from google.com or from microsoft.com.

# tcpdump -XX greater 100 and \(src host google.com or src host microsoft.com\)

The and and or keywords in tcpdump's filter language have the same precedence and are evaluated left to right. This means that without brackets, tcpdump could capture packets from microsoft.com regardless of packet size. With brackets, tcpdump first makes sure that all packets are greater than 100 bytes, and only then checks their origin.

Note the backslash symbol ("\") before each bracket. We have to place them there because of the shell: the Unix shell has its own understanding of what brackets are used for, so we have to tell it to leave these particular brackets alone and pass them as-is to tcpdump. The backslash characters do exactly that.

Speaking of precedence, we have to keep in mind that in tcpdump's filter expression language, not has higher precedence than and and or. tcpdump's manual page has a very nice example that emphasizes what this means.

not host vs and host ace

and

not (host vs or host ace)

are two different expressions. Because not has higher precedence than and and or, the filter in the first example captures packets that are not to/from vs but are to/from ace. The second filter, on the other hand, captures packets that are neither to/from vs nor to/from ace. That is, the first will capture a packet from ace to some other host (but not to vs), while the second won't capture this packet.
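The difference can be modeled directly. In the Python sketch below, host stands for "the packet is to or from that host", and the two functions mirror the two filters:

```python
def host(h, src, dst):
    """Model of the pcap qualifier 'host h' for a packet from src to dst."""
    return src == h or dst == h

def filter_one(src, dst):  # not host vs and host ace
    return (not host("vs", src, dst)) and host("ace", src, dst)

def filter_two(src, dst):  # not (host vs or host ace)
    return not (host("vs", src, dst) or host("ace", src, dst))

# A packet from ace to some third host: kept by the first filter only.
print(filter_one("ace", "other"), filter_two("ace", "other"))  # True False
```

Evaluating both functions on the same packet makes the precedence rule concrete: the brackets change which packets survive the filter.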

Repeating qualifiers

To conclude this article, I would like to show you one more thing that may come in handy when writing complex tcpdump filter expressions.

Take a look at the following example.

# tcpdump -XX greater 100 and \(src host google.com or microsoft.com\)

We already saw this example, with one little exception: in the previous example there was a src host qualifier before microsoft.com, and now it's gone. The thing is, if we want to use the same qualifier twice in a row, we don't have to specify it twice. Instead, we can just write the qualifier's parameter, and tcpdump will know what to do.

This makes tcpdump filter expression language much easier to understand and much more readable.

Afterword

I hope you found this article useful. In case you have questions, suggestions, or you would like to share your appreciation with the author ;-) , don't hesitate to mail me at alexander.sandler@gmail.com



Tim Cook email to Apple employees: "Apple is not going to change"

In an email designed to reassure Apple employees who might be a bit trepidatious after the Steve Jobs announcement yesterday afternoon, Apple CEO Tim Cook told workers that "Apple is not going to change." The email, leaked to Ars Technica by a reputable source, includes Cook's reassurance that the company will "continue to make the best products in the world" under his guidance.

The text of the email is included in its entirety below.

Team:

I am looking forward to the amazing opportunity of serving as CEO of the most innovative company in the world. Joining Apple was the best decision I've ever made and it's been the privilege of a lifetime to work for Apple and Steve for over 13 years. I share Steve's optimism for Apple's bright future.

Steve has been an incredible leader and mentor to me, as well as to the entire executive team and our amazing employees. We are really looking forward to Steve's ongoing guidance and inspiration as our Chairman.

I want you to be confident that Apple is not going to change. I cherish and celebrate Apple's unique principles and values. Steve built a company and culture that is unlike any other in the world and we are going to stay true to that-it is in our DNA. We are going to continue to make the best products in the world that delight our customers and make our employees incredibly proud of what they do.

I love Apple and I am looking forward to diving into my new role. All of the incredible support from the Board, the executive team and many of you has been inspiring. I am confident our best years lie ahead of us and that together we will continue to make Apple the magical place that it is.

Tim
