11 posts tagged 'Linux'

  1. 2009.04.16 How To Bypass Linux Magazine Membership Check by CEOinIRVINE
  2. 2009.04.16 How to use Linux awk programming and regular expression to read a big log file? by CEOinIRVINE
  3. 2009.03.12 Running Snort as a Linux Daemon by CEOinIRVINE
  4. 2009.02.06 Below is a list of resources you've selected: by CEOinIRVINE
  5. 2009.02.06 CIS BenchMark by CEOinIRVINE
  6. 2009.02.06 CIS BenchMark by CEOinIRVINE 3
  7. 2008.12.26 OS Shoot Out by CEOinIRVINE
  8. 2008.11.18 Snort Configuration : Linux by CEOinIRVINE
  9. 2008.10.15 Linux Daylight Saving Time by CEOinIRVINE
  10. 2008.10.02 Linux Memory/CPU administration by CEOinIRVINE

How To Bypass Linux Magazine Membership Check

Linux Magazine is one of my many email subscriptions, through which I read Linux-related information.

For the time being, every Linux Magazine article is available to the public at zero cost, unless you want a hardcopy delivered to your doorstep!


Although reading online is free of charge (FOC), it requires you to register and log in as a member in order to read articles in full, post comments, edit your email subscription, download white papers, etc.



If you just want to read the articles and are not keen to register as a member, or you have forgotten your login credentials, there are at least two ways to bypass the Linux Magazine membership restriction.

These tricks have been used to bypass the Experts Exchange membership restriction as well, along with other premium content sites that require a membership login.

As long as such a site allows Google crawlers to index its pages in full, you can pretend to be a Googlebot and browse the membership-only content in full as well, i.e. counting on the Googlebot's membership privilege!

Using Google Cache

Ask Google to search for the article URL, then click through to the Google Cache version of the article. For example:

The original URL will display an excerpt and require the reader to register and log in for the full content.

The Google Cache version of the original URL lets the reader read the full article as the Google crawler can "see" it, i.e. click the "Cached" link (highlighted in blue).

This way, you can read the Google Cache version of a Linux Magazine article without registering or logging in as a member.
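Alternatively, Google's cache: search operator jumps straight to the cached copy; a minimal sketch (the placeholder stands for the actual article URL, which I leave unspecified here):

cache:<article-URL>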

Tweaking a web browser User Agent String

Purposely and manually change the default User Agent string of your favorite web browser. In Firefox 2.0.0.7, for example, open about:config and replace the value of "general.useragent.extra.firefox" with

Googlebot/2.1 (+http://www.googlebot.com/bot.html)

in order to spoof the Firefox User Agent String as Googlebot!
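The same trick works outside the browser, too. As a minimal sketch (assuming curl is installed; the URL is a placeholder, not a real article), you can fetch a page while presenting the Googlebot User Agent string:

curl -A "Googlebot/2.1 (+http://www.googlebot.com/bot.html)" http://www.example.com/some-article

The -A option sets the User-Agent header for that single request, so your browser configuration is left untouched.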



How to use Linux awk programming and regular expression to read a big log file?

Use the Linux tail command to examine the log file content, in order to understand the pattern of the log entries.

Using the db2diag.log as an example, each event / incident is initiated with a line that contains date and time:

2008-01-02-10.52.47.720435+480 I1840G300          LEVEL: Event
Then, I use awk and its regular expression support to filter out all log entries that match the particular day and hour of interest:

First, find out the record number of the first log entry that matches the date and time pattern, using awk's regular expression (RegEx) matching:

awk '{if ($1 ~ /2008-01-16-17/){print NR}}' < db2diag.log | head -1
Next, find out the record number of the last log entry that matches the date and time pattern:

awk '{if ($1 ~ /2008-01-16-17/){print NR}}' < db2diag.log | tail -1
Finally, use awk again to extract or filter all log entries within the range of the first and last record numbers learned in the last two steps:

awk '{if (NR >= 7529 && NR <= 8382){print $0}}' < db2diag.log
Because of the nature of db2diag.log, the last record number I get from awk doesn't include the full detail of the DB2 event / incident that happened at that particular time. Thus, I purposely top up the "last record number" (supposing the last record number reported by the awk command is 8382, I raise it to 8390):

awk '{if (NR >= 7529 && NR <= 8390){print $0}}' < db2diag.log >tempfile
If you would like to output the extracted log entries to a temporary file, just redirect the standard output of the awk command to a temp file as you wish (e.g. append >tempfile to the end of the last awk command sample).
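As a side note, the first two steps can be folded into a single pass over the file; here is a minimal awk sketch that prints the first and last matching record numbers in one go:

awk '$1 ~ /2008-01-16-17/ { if (!first) first = NR; last = NR } END { print first, last }' < db2diag.log

The first variable is set only on the first match, while last is updated on every match, so the END block reports both boundaries.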

A brief note about the awk programming syntax used in the sample code above:

$1 ~ /2008-01-16-17/ means to check if 1st field/column text pattern matches with the regular expression (i.e. 2008-01-16-17).


Unless the field separator (FS) is specified, awk regards space as field separator by default.

The first field (a.k.a. column) of a line (awk treats each line as a record) is denoted as $1, the 2nd field as $2, and so forth. $0 simply means all the fields/columns of the line/record.

Thus, the combination of awk programming and organized text files can form a simple database system!


The awk regular expression pattern is enclosed by a pair of slash characters (/).

The awk RegEx operator for match comparison is a tilde/swung dash character (~). (Refer to the GNU awk notes on regular expressions.)

print NR prints the record number (NR), i.e. the line number in the log file. To print the number of fields/columns in a line/record, use NF.
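To see NR, NF, $1, and the field separator in action on a familiar file, try this throwaway example (any colon-delimited file will do; /etc/passwd is just a convenient one):

awk -F: '{ print NR, NF, $1 }' /etc/passwd

The -F: option overrides the default whitespace field separator, so $1 becomes the login name, NF the number of colon-separated fields, and NR the line number.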


Running Snort as a Linux Daemon

IT 2009. 3. 12. 06:25

Problem

You run a Linux machine and you want to run Snort in the background, starting up at boot time.

Solution

Snort provides a daemon mode to allow it to run in the background. This is activated by using the -D switch.

[root@frodo rules]# snort -D -c /etc/snort/snort.conf -l /var/log/snort
[root@frodo rules]# ps -ef | grep snort
root     10738     1  0 11:34 ?  00:00:00 snort -D -c /etc/snort/snort.conf -l /var/log/snort

Discussion

You'll probably want to run Snort like this: starting at boot and running in the background. If you want to start Snort earlier in the boot sequence, consult your system documentation as to how to edit the boot scripts.

The exact method for starting Snort at boot varies slightly from distribution to distribution. The simplest method, if your system supports it, is to modify the /etc/rc.d/rc.local script. This script runs after all the other init scripts on the system, so your system will be unmonitored between the start of network services and the start of Snort. Add a line similar to the following to your rc.local script:

/usr/local/bin/snort -D -c /etc/snort/snort.conf -l /var/log/snort

You must verify the locations that are relevant to your particular setup. There is an example Snort startup script in /snort-2.x.x/contrib/S99snort.
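If your distribution uses SysV-style init scripts instead of rc.local, a minimal start/stop wrapper might look like the following sketch (the Snort path and options are assumptions carried over from the example above; the contrib S99snort script is a more complete starting point):

#!/bin/sh
# Minimal SysV-style wrapper for Snort; not a full init script.
case "$1" in
  start)
    /usr/local/bin/snort -D -c /etc/snort/snort.conf -l /var/log/snort
    ;;
  stop)
    # pidof returns the PIDs of all running snort processes
    kill $(pidof snort)
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    ;;
esac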

Running Snort as a daemon is useful only if you are getting good notification from Snort about potential intrusions; otherwise, you are effectively ignoring it. You should refer to the other recipes regarding alerting.


Below is a list of resources you've selected:


CIS BenchMark

Hacking 2009. 2. 6. 04:53


CIS BenchMark

Hacking 2009. 2. 6. 04:35

Below is a list of resources you've selected:


OS Shoot Out

IT 2008. 12. 26. 03:02

OS shoot-out: Windows vs. Mac OS X vs. Linux

Editor's Note: This article is reprinted from InfoWorld. For more IT news, subscribe to the InfoWorld Daily newsletter.

The Mac's been on a roll, both due to its highly regarded Mac OS X Leopard operating system and to an unhappy reception for Microsoft's Windows Vista. The result: For the first time in memory, the Mac's market share has hit 9.1 percent, according to IDC data, and Windows' market share has dipped below 90 percent. (Linux distributions make up the rest.)

But can either Mac OS X or Linux be more than a niche OS? After all, Windows runs practically everything, from widely used productivity apps such as spreadsheets to highly specialized niche applications such as chemical modeling. Mac OS X and Linux simply don't have the application base that Windows does. Of course, the fact that you can run Windows on a Mac or Linux system, thanks to Parallels Desktop and EMC VMware Fusion, lets you have your cake and eat it too.


For some users -- often technically savvy people such as engineers, consultants, designers, and CTOs -- switching is clearly an option that already works quite well. In the past year, running Mac OS X or Linux as your default OS has been made easier by the capability to run Windows in a virtual machine, giving you access to both Windows-only applications and Web sites that rely on Microsoft's Internet Explorer-only ActiveX technology. But in a business environment, switching to a Mac or Linux PC may not be quite as easy.

The Windows option

Despite the increasing adoption of alternatives to Windows, the Microsoft OS remains the standard choice for the vast majority of businesses. After all, it's been their standard for nearly two decades; they know it, have become dependent on it, and understand its capabilities and limitations. Plus, it's backed by a company that puts a lot of resources into maintaining, supporting, and enhancing the OS for its very wide user base -- and has a huge third-party support system, from vendors to consultants.

For most businesses, considering something other than Windows is not even a question; their concern is when to shift to a new version of Windows. Still, as users (re)discover the Mac and questions over Windows' long-term resource requirements hang in the air, some are considering alternatives to, or at least supplements for, Windows in the form of Mac OS X and Linux.

The Mac OS X option

Of the plausible alternatives to Windows, Apple's Mac OS X has the largest market share and history. InfoWorld chief technologist Tom Yager has written that the latest version of the Mac OS, Leopard (10.5), is simply the best operating system available. And Macs are indeed popping up more frequently even within IT circles -- I've seen more MacBook Pros in the hands of CTOs and IT execs at conferences in the past year than ever before. Although there are no real numbers on business adoption of Macs specifically, it's clear that Apple is in growth mode, having gained an increasing proportion of all new computer sales for more than a year now.

Many businesses have already adopted the Mac as a standard platform, discovering that the hardware is typically better designed than equivalent Windows systems for the same price, that security risks are lower, and that there are more enterprise-quality management tools than they expected. InfoWorld has chronicled how to make the switch to Mac OS X.

The drive for Mac adoption often comes from users, not IT. InfoWorld's Yager has chronicled the adventures of one PC user who switched to the Mac OS, showing that for an individual, the conversion was ultimately a rewarding one.

A key tool for any Mac OS X switcher is a virtual machine to run Windows for those apps and Web sites that require it. Both Parallels Desktop 3.0 and VMware's Fusion software will do the trick, as InfoWorld's comparative review has shown.

Although Macs are compatible with most typical hardware, such as monitors and drives, fitting a Mac into an enterprise's management systems and ERP applications can be a different story. Yager's Enterprise Mac blog and the Mac Enterprise user group both provide advice on managing Macs in a traditional IT environment.

The Linux option

The more technically inclined may be attracted to Linux, the most popular form of desktop Unix. Linux desktops typically are challenged by limited hardware compatibility (due to lack of drivers), limited application options, and user interfaces that require active participation to get work done, which tends to keep Linux away from the general user population. Still, it's possible to do, and InfoWorld has chronicled how to make the switch to Linux.

But those who work with a Linux server all day may find that using it on the desktop as well actually makes their lives easier.

Just as Mac users need occasional access to Windows, so do Linux users. Because Linux distributions run on Windows-compatible hardware, it's straightforward to use desktop virtualization software, such as Parallels Workstation, Sun's (formerly Innotek's) VirtualBox, and VMware's Workstation software, to provide access to both environments.

Although some enterprises have committed to wide Linux deployment -- such as automaker Peugeot Citroen's plans to install 20,000 Novell Suse Linux desktops -- most have left Linux to the engineering and development staff.

InfoWorld Enterprise Desktop blogger Randall Kennedy argues that desktop Linux is doomed to remain a tiny niche OS, given the Linux community's lack of interest in providing a UI that regular people could use. Kennedy tried to spend a week working on nothing but the Ubuntu distribution of Linux but gave up on the fifth day.

But Kennedy's take isn't the last word on desktop Linux. Frequent InfoWorld contributor Neil McAllister put together a special report on how to move from Windows to Linux, concluding that the effort was not as hard as you might think.

Who's right? As with any platform choice, they both may be. A one-size-fits-all approach may be unrealistic. And that likely explains why many businesses will have a mix, dominated by Windows XP today (and perhaps Windows 7 in a few years) but not exclusively tied to Microsoft's OS.


Snort Configuration : Linux

Hacking 2008. 11. 18. 09:17

Configuring Snort on a Linux System

The process of installing Snort on a Linux system is very close to the process on a Windows system. The primary difference is that the default (relative) paths in the snort.conf file are much more likely to work without modification on a Linux system. You will need to download the latest version of Snort that is appropriate for your system. If you are using Fedora Core 5, this is as simple as typing yum install snort, or you could download and install the .rpm from snort.org.

Configuring Snort Options

The next step is to configure the various options that determine how Snort will behave, using the Snort configuration file. The configuration file is well documented and very easy to use. To get Snort working the way you want it to, follow these simple steps.

  1. Start by opening the main Snort configuration file. By default it will be located at /etc/snort/snort.conf.

  2. Configure the HOME_NET variable, if desired, by removing the # from the line you need. # is a comment indicator in the Snort configuration file. The HOME_NET variable defines which networks are the "trusted" internal networks. This is used with the signatures to determine when the internal network is being attacked. By default, HOME_NET is set to any network with the var HOME_NET any line in the snort.conf. Setting this to accurately reflect your internal address space will reduce the number of false positive alerts you receive. A common example would be var HOME_NET 192.168.1.0/24 or perhaps var HOME_NET [192.168.1.0/24,192.168.2.0/24]. (A consolidated snort.conf fragment appears after this list.)

  3. Configure the EXTERNAL_NET variable if desired. This is the network you expect attacks to come from. The recommendation is to set this to everything except your HOME_NET using the following: var EXTERNAL_NET !$HOME_NET. (Default: var EXTERNAL_NET any.)

  4. Next, define which servers are running specific services. For example, by setting HTTP_SERVERS to only specific servers, Snort will only watch for HTTP attacks targeted at those servers. If you wish to see attacks targeting servers that are not running the affected services, leave the defaults, which are to watch for attacks directed towards any internal servers. (Default: var DNS_SERVERS $HOME_NET) If you had Web servers running on 192.168.1.11 and 192.168.1.12, you could tell Snort to only look for HTTP attacks targeting those servers by setting the following variable: var HTTP_SERVERS [192.168.1.11/32,192.168.1.12/32].

  5. If desired, configure the specific ports that services are available on. For example, the default for HTTP is defined on the following line: var HTTP_PORTS 80. Similar to defining the servers in the preceding section, this will tell Snort to only look for attacks targeting specific ports. With the default configuration, Snort would ignore an HTTP attack to port 8080. Again, this setting will help focus where Snort looks for different types of attacks to occur.

  6. If you are interested in detecting the usage of AOL Instant Messenger (AIM), the various IP addresses of the AIM servers are defined in the snort.conf file. This is done because the IP addresses change frequently, and by using a variable, the rules don't have to be updated each time the IP address changes. If you don't wish to trigger based off AIM usage, don't worry about changing these IP addresses.

  7. Download the Snort rules from http://www.snort.org/rules. Click Download Rules on the right-hand side of the page. On the Download Rules page, scroll down to the section labeled Sourcefire VRT Certified Rules (unregistered user release). Download the latest ruleset.

  8. Extract the rules (and /docs) to the location of your choice, typically /etc/snort/rules and /etc/snort/docs.

  9. Configure the RULE_PATH variable, which tells Snort where to find the rules used for triggering events. You can use a relative path such as var RULE_PATH ../rules or an absolute path such as /etc/snort/rules.

  10. The next section has some commented out lines to disable certain detections of some infrequently seen types of traffic. Unless you are having some issues with those alerts or your IDS is very low on resources, it's probably fine to just leave those at the default (enabled) configuration.

  11. The last few lines of the "step 1" section enable you to configure the detection engine for systems with limited resources. Unless you are having issues, you can leave this option alone.

  12. After that the "step 2" and "step 3" sections of the configuration file allow you to enable or disable specific functionality and detect particular types of attack, such as fragmentation attacks, stateful inspection, and stream reassembly options. Delving into the specifics of each of those options is beyond the scope of this chapter and for many, the default setting will serve them well.

  13. The section labeled Step #4 contains output options for Snort. Uncomment output alert_syslog: LOG_AUTH LOG_ALERT (the default). Regardless of what facility and severity you configure here, the Snort alerts will be generated as auth.info. You also need to include the -s switch on the command line to enable syslog logging. We will discuss syslog in more detail in the next chapter. If you don't have a syslog server to log to yet, just make note of the setting and come back to it when your syslog server is set up.

    Using the preceding example of LOG_AUTH and LOG_ALERT, you would need the following in your syslog.conf file to log to a syslog server at 192.168.1.99:

    auth.info @192.168.1.99

    If you are using syslog-ng, you would need a logging destination defined, a filter that specifies what events to capture, and a log statement in the syslog-ng.conf file. An example of this configuration would be the following:

    destination d_lab { udp ("192.168.1.99" port(514)); };
    filter f_most { level(info..emerg); };
    log { source(s_sys); filter(f_most); destination(d_lab); };

  14. Edit the paths for the dynamically loaded libraries in section #2 to point to the proper path. Depending on your Linux distribution and installation method, these paths may not be the default. For example, on Fedora Core 5, using yum to install Snort, the settings would use the following paths: dynamicpreprocessor directory /usr/lib/snort/dynamicpreprocessor and dynamicengine /usr/lib/snort/libsf_engine.so. If you receive an error when you try to run Snort, along the lines of Unknown rule type: dynamicpreprocessor directory or Unknown rule type: dynamicengine, then your installation of Snort is not configured to use dynamically loaded processors. In this case, simply place a # in front of both of those lines to comment them out.

  15. The last section (Step #6) contains various include statements that specify the rulesets to be checked. Some rules are disabled by default, such as chat.rules, which is triggered by the use of various instant messaging clients. To enable or disable a given ruleset, simply add or remove a # at the beginning of the include line. This entry can be left as a relative path (for example, include $RULE_PATH/local.rules) because the RULE_PATH variable will be expanded to make it an absolute path.

  16. If you need any custom rules that are not included with the standard Snort release, you can download rules provided by the Snort community from the Rules page on the Snort Web site. If you are looking for something unusual, you might find it there without having to create the rule yourself.
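Putting the variable and output settings from the steps above together, a consolidated snort.conf fragment might look like the following sketch (the addresses and paths are the examples from this section, not requirements):

var HOME_NET [192.168.1.0/24,192.168.2.0/24]
var EXTERNAL_NET !$HOME_NET
var HTTP_SERVERS [192.168.1.11/32,192.168.1.12/32]
var HTTP_PORTS 80
var RULE_PATH /etc/snort/rules
output alert_syslog: LOG_AUTH LOG_ALERT
include $RULE_PATH/local.rules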

You are now ready to start up Snort and see what it looks like in action. When you start Snort you can specify the interface to listen on using the -i switch, such as -i eth0. If you don't specify one, it will use the first interface. Use the -c option to tell Snort which configuration file to use. It can be useful to have multiple configuration files prepared so you can quickly switch configurations for special circumstances. You could prepare different configuration files to home in on certain issues, segments, or more in-depth logging. Another important option is -A, which tells Snort what type of alerts to generate. The options are fast, full, console, or none.

The following command example would start Snort listening on the first interface (no -i used), with alerts going to the console only, using the configuration file at /etc/snort/snort.conf. The -l switch tells Snort where the logging directory is located. The -K switch tells Snort what types of logs to generate. ASCII logs are easier for a human to read, but they take a little more time to log. If speed isn't a concern, the ASCII logs will probably be the easiest to read and analyze.

snort -A console -c /etc/snort/snort.conf -l /etc/snort/log -K ascii

You should see any triggered rules produce a message on the console and an entry on your syslog server. If you add the -s switch to the end of the line, it will tell Snort to log to the syslog server you have configured in the snort.conf file; however, alerts will then not also be displayed on the Snort console. If you want to create a rule for testing purposes to see what the results look like, create a test rule file, such as TESTING.rules, and place it in the rules folder (/etc/snort/rules, in this example). In this file you could place the following line, which would trigger on any attempt to ping another system.

alert icmp any any -> any any (msg:"TEST rule"; sid:1000001;)

Edit the snort.conf to read your new rule by inserting the following statement towards the end of the file: include $RULE_PATH/TESTING.rules. As a last step, edit the sid-msg.map file (in /etc/snort, in this example). This file provides a mapping between Snort alert messages and alert IDs or numbers. Custom alerts should use an ID number of more than one million. Add the following line at the end of the file:

1000001

Placing the ID number is the minimum requirement for Snort not to output an error. You can certainly fill in all the other fields, following the existing message maps as a guideline. When this is done, you will need to stop and restart Snort. Here is a partial display of the console output of a single ping and the reply.

10/12-21:29:35.911089 [**] [1:0:0] TEST rule [**] [Priority: 0] {ICMP}
192.168.1.99 -> 192.168.1.103
08/10-18:22:20.284438 [**] [1:0:0] TEST rule [**] [Priority: 0] {ICMP}
192.168.1.103 -> 192.168.1.99

You can also add your own custom rules to the local.rules file. When you open the file, you will find it is essentially empty, existing solely for you to place your custom rules in it. The local.rules file is "included" in the snort.conf by default, so you will not need to add it there. You will, however, still need to edit the sid-msg.map file for any rules placed in local.rules.
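For reference, a sid-msg.map entry with the optional message text filled in might look like this (assuming the conventional sid || message format used by the existing entries):

1000001 || TEST rule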

The -A option alters the display of the alerts on the console, while the -K option controls how the alerts are logged to the log directory. You should experiment with the different display formats to find the one that provides adequate information with minimal strain on the Snort host. For day-to-day operations you would probably want to use fast alerts in your log files, which look like the ones that are sent to the console with the console option. The available alert modes and logging formats are outlined here for handy reference, and a sample invocation follows the list.

  • -A console Logs to the console in the following format:

    10/12-21:29:35.911089 [**] [1:0:0] TEST rule [**] [Priority: 0] {ICMP}
    192.168.1.99 -> 192.168.1.103
  • -A fast Logs in the same format as console, but writes the alerts to a /snort/alert file with no output to the console.

  • -A full Logs to the /snort/alert file in the following format:

    [**] [1:0:0] TEST rule [**]
    [Priority: 0]
    10/12-21:38:53.741606 192.168.1.103 -> 192.168.1.99
    ICMP TTL:64 TOS:0x0 ID:6350 IpLen:20 DgmLen:60
    Type:0 Code:0 ID:512 Seq:7936 ECHO REPLY
  • -K pcap This is the default mode if you don't specify an alternate format on the command line. This file will contain the alert packets in their entirety. You can open this file using a network sniffer such as Wireshark.

  • -K ascii Will create a folder under /log for each IP address. Within that folder each rule will create a log file. The log entries will be the same format as the "full" alert format.

  • -K none No log file will be created.
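For example, a plausible day-to-day invocation based on the options discussed above (one reasonable combination, not the only one) would run Snort as a daemon with fast alerts, ASCII logs, and syslog output:

snort -D -c /etc/snort/snort.conf -l /var/log/snort -A fast -K ascii -s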

Congratulations! You now have a working IDS. Figure 4.14 shows the syslog alerts from TESTING.rules in the Kiwi Syslog Daemon console.

Figure 4.14: Snort Alerts in Kiwi Syslog Daemon Console

Using a GUI Front End for Snort

Like the Windows version of Snort, some have felt the administration of Snort could be improved upon by implementing a more robust GUI interface. There are several Snort GUIs to choose from, aimed both at the configuration of Snort and at the interpretation of Snort alerts. Some really only offer buttons to configure options on the Snort command line and offer very little additional functionality, while others bring some very powerful additional features to the table. We will discuss the operation of some of the better offerings in the next section.

Basic Analysis and Security Engine

Basic Analysis and Security Engine (BASE) is available for download from http://www.base.secureideas.net/about.php. The purpose of BASE is to provide a Web-based front end for analyzing the alerts generated by Snort. BASE was derived from the ACID project (Analysis Console for Intrusion Databases). Whereas ACID is more of a general-purpose front end for viewing and searching for events, BASE is a Snort-specific utility. The instructions to configure BASE assume you have already installed and configured Snort. Snort must be built with the --with-mysql switch because Snort does not support MySQL output by default. The Snort Web site has RPM packages with MySQL support already included for some operating systems. This is the list of dependencies for running BASE: httpd, Snort (with MySQL support), MySQL, php-gd, pcre, php-mysql, php-pdo, php-pear-Image-GraphViz, graphviz, and php-adodb. Follow these steps to get BASE up and running.

  1. Download and install MySQL and BASE

  2. Edit the /snort/snort.conf file. Uncomment and edit the following line:

    output database: log, mysql, user=snort password=snortpass dbname=snort
    host=localhost
  3. The next few steps set up the MySQL database and its settings. After you install MySQL, enter the MySQL interactive command mode by typing mysql on the command line. All commands must have a semicolon at the end of the line. By default, the MySQL installation will not have a password set at all. You should set a root password with the following commands.

    mysql
    mysql> SET PASSWORD FOR root@localhost=PASSWORD('somepassword');

    After you have assigned a password to the root account, simply entering mysql will not enable you to access the interactive command mode. After a password has been assigned, use mysql -u <username> -p. You will then be prompted to enter the password for the user you specified (typically root).

  4. The next step is to create the Snort database.

    mysql> create database snort;
  5. You now need to give the Snort user permissions to add the needed tables to the Snort database. Use these commands:

    mysql> grant INSERT, SELECT on snort.* to snort@localhost;
  6. You should now set the password for the Snort user to the same password you used in the Snort configuration file.

    mysql> SET PASSWORD FOR snort@localhost=PASSWORD('snortpass');
  7. The next step is to add some additional permissions for the Snort database using the following commands:


    mysql> grant ALL on snort.* to snort@localhost;
    mysql> exit
  8. Now that the database has been created, you need to populate it with the tables Snort uses. Use the following command to create the tables:

    mysql -u root -p snort < /etc/snort/schemas/create_mysql

    When the command completes, it will not give any indication of its success; therefore, it will be necessary to manually verify that the tables were created.


    Tip 

    If the package you installed did not include the /snort/schemas/ directory, you can download the source package and extract the directory from there. With Fedora Core 5, for some reason installing the Snort with MySQL support did not include the schemas directory.

  9. Verify the MySQL tables were created in the Snort database by entering the following commands. You should see output similar to that shown in the following example:

    mysql -u root -p
    show databases;
    +----------+
    | Database |
    +----------+
    | mysql |
    | snort |
    | test |
    +----------+
    use snort;
    show tables;
    +------------------+
    | Tables_in_snort |
    +------------------+
    | data |
    | detail |
    | encoding |
    | event |
    | icmphdr |
    | iphdr |
    | opt |
    | reference |
    | reference_system |
    | schema |
    | sensor |
    | sig_class |
    | sig_reference |
    | signature |
    | tcphdr |
    | udphdr |
    +------------------+
    exit

    The list of databases is not significant, as long as the Snort database exists, of course. The table listing must be accurate. If any are missing, Snort will generate an error when you run it.

  10. Install php-gd, which is used to generate the graphs in BASE. On Fedora Core 5 you can just type yum install php-gd.

  11. Install ADODB, which is a database abstraction library for PHP. On Fedora you can simply enter yum install php-adodb.

  12. It's now time to configure BASE itself. Edit the /usr/share/basephp4/base_conf.php file to ensure that the following lines are configured with paths and settings appropriate for your configuration.

    $BASE_urlpath = '/base';
    $DBlib_path = '/usr/share/adodb';
    $DBtype = 'mysql';
    $alert_dbname = 'snort';
    $alert_host = 'localhost';
    $alert_port = '';
    $alert_user = 'snort';
    $alert_password = 'snortpass';

    You should now be able to access the BASE Web page at the following URL: http://localhost/base/.


    The very first time you start up BASE, none of the database tables have been created. You will see something like the page shown in Figure 4.15.

    Figure 4.15: BASE Setup

  13. Click on the Setup page link.

  14. Click the Create BASE AG button on the right-hand side. You will see several success messages, as shown in Figure 4.16.

    Figure 4.16: BASE Success

  15. Click the Main Page link. This should take you to the primary BASE interface as shown in Figure 4.17.

Figure 4.17: BASE Main Page

Although this window may not be too flashy, there is a wealth of information you can discover. Most of the fields are actually links. By clicking to the right of Today's alerts, for example, you can get a sorted list of unique alerts, a listing of all alerts, or a list sorted by source IP address or destination IP address. The other headings along the left side offer similar functionality. Of particular note are the links for the Most Frequent 15 addresses by source address. This would enable you to quickly see which systems are generating the majority of your alerts. If you open that window (shown in Figure 4.18) there are several additional fields that are also hyperlinked.

Figure 4.18: BASE Most Frequent by Source IP

Note the field at the bottom labeled ACTION. This enables you to configure the alert groups. Alert groups are basically shortcuts to enable you to view a subset of alerts quickly, without having to navigate through the various menus to get there. For example, suppose you want to know anytime that 192.168.1.1 generates an alert. You can check the check box to the left of 192.168.1.1, and then use the {action} drop-down box to select Create AG (by Name). In the action column, enter .1_ALERTS to use as the alert group name. Finally, click Selected.

The next screen enables you to enter a description for the newly created alert group. Enter a meaningful text description for the group and click Save Changes. The next screen will be a listing of all alerts from 192.168.1.1. This screen is the alert group. In the future, if you want to quickly see this group of alerts, you can click Alert Group Maintenance at the bottom of each page, and then click the alert group you want to view. In this way, any subset of alerts is only two clicks away, sort of like a shortcut straight to a particular set of filtering criteria.

Another feature of note is the Administration link at the bottom of each page. This will take you to a screen where you can configure users for BASE. There are four options on the administration screen: list users, create a user, list roles, and create a role. These screens enable you to create users and assign them to various roles. If you click List Roles, you can see the four predefined roles. If you want to assign a user to the administrator role, simply click Create a user. Enter the login name, a full name or description, and a password. Use the drop-down box to select a role and then click Submit Query. None of the settings here will take effect until you edit the base_conf.php file and change the value of $Use_Auth_System = 1;. A value of 0 (the default) means the authentication is disabled and everyone has full access to BASE. Only the admin role has access to the administration screen.


Tip 

Remember the different logging options for Snort on the command line. Previously, we used -A console, which logs Snort events to the Snort terminal. If you are going to be using a different front end for viewing Snort alerts, there isn't much value in also logging to the console. You can use -A none when starting Snort, which will cause console alerting to be skipped entirely.


Linux Daylight Saving Time

IT 2008. 10. 15. 03:33
How to: DST settings on Red Hat 4.0 and any other OS.


Wikipedia defines DST as follows:
Daylight saving time (DST), also known as summer time in British English, is the convention of advancing clocks so that evenings have more daylight and mornings have less. Typically clocks are adjusted forward one hour in late winter or early spring and are adjusted backward in autumn. Details vary by location and change occasionally.

Do I need to apply the DST patch on my server?

The DST patch is only required in a few countries, such as those in North America (e.g. the United States). DST is not required in India or most of Asia (China, Hong Kong, Macau, Taiwan) at all (although DST was observed there at one time). Please see the Wikipedia article, which displays the usage and a short history of daylight saving time by location in alphabetical order.

[Image: worldwide daylight saving time (DST) heat map]

Do I need to apply the DST patch on AIX/UNIX/Linux/HP-UX/Solaris/Windows servers running in IST (Indian) timezone?

I've received at least 8-10 emails regarding the IST timezone. The short answer is no (see the heat map).

Task: Verify whether you need the DST update, i.e. display the timezone data

Use the zdump utility to display timezone data.
# zdump -v Australia/Queensland
# zdump -v /etc/localtime | grep 2007

If you see the date "Sun Mar 11" for 2007, your system is already patched and there is no need to read further.
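For a patched system in the US Eastern timezone, for instance, the 2007 transition lines from zdump look something like this (illustrative output; the exact fields vary by zdump version):

# zdump -v /etc/localtime | grep 2007
/etc/localtime  Sun Mar 11 06:59:59 2007 UTC = Sun Mar 11 01:59:59 2007 EST isdst=0
/etc/localtime  Sun Mar 11 07:00:00 2007 UTC = Sun Mar 11 03:00:00 2007 EDT isdst=1
/etc/localtime  Sun Nov  4 05:59:59 2007 UTC = Sun Nov  4 01:59:59 2007 EDT isdst=1
/etc/localtime  Sun Nov  4 06:00:00 2007 UTC = Sun Nov  4 01:00:00 2007 EST isdst=0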

Many of our servers are located in North America, and all of these servers are powered by Red Hat Enterprise Linux, Debian Linux, or MS-Windows Server 2003.

If you are running RHEL 4.0...

Update tzdata package:
# up2date -u tzdata
# system-config-date

OR
# cp /usr/share/zoneinfo/America/New_York /etc/localtime
# zdump -v /etc/localtime |grep 2008

If you are running Debian/Ubuntu Linux...

Use apt-get to update tzdata package:
$ sudo apt-get install tzdata
OR
# apt-get install tzdata

Run tzconfig to update your configuration:
# tzconfig
# zdump -v /etc/localtime |grep 2008

Microsoft Window Server / XP DST

Please see: How to configure daylight saving time for the United States in 2007.

Sun Java and DST

A few servers running Sun Java require an update as well.

Manually update DST under Linux

You can manually update your configuration by following these instructions as well.

Check DST using a Web browser

You can also check your Linux or Windows workstation by visiting the University of Minnesota's DST time check site (a browser with JavaScript support is required).


Linux Memory/CPU administration

IT 2008. 10. 2. 03:02

 

Red Hat Linux comes with a variety of resource monitoring tools. While there are more than those listed here, these tools are representative in terms of functionality. The tools are:

  • free

  • top (and GNOME System Monitor, a more graphically oriented version of top)

  • vmstat

  • The Sysstat suite of resource monitoring tools

Let us look at each one in more detail.

1. free

The free command displays system memory utilization. Here is an example of its output:

             total       used       free     shared    buffers     cached
Mem:        255508     240268      15240          0       7592      86188
-/+ buffers/cache:     146488     109020
Swap:       530136      26268     503868

The Mem: row displays physical memory utilization, the Swap: row displays the utilization of the system swap space, and the -/+ buffers/cache: row displays used and free memory adjusted to exclude the memory currently devoted to system buffers and cache.

Since free by default only displays memory utilization information once, it is only useful for very short-term monitoring, or quickly determining if a memory-related problem is currently in progress. Although free has the ability to repetitively display memory utilization figures via its -s option, the output scrolls, making it difficult to easily see changes in memory utilization.

Tip

A better solution than using free -s would be to run free using the watch command. For example, to display memory utilization every two seconds (the default display interval), use this command:

watch free

The watch command issues the free command every two seconds, after first clearing the screen. This makes it much easier to see how memory utilization changes over time, as it is not necessary to scan continually scrolling output. You can control the delay between updates by using the -n option, and can cause any changes between updates to be highlighted by using the -d option, as in the following command:

watch -n 1 -d free

For more information, refer to the watch man page.

The watch command runs until interrupted with [Ctrl]-[C]. The watch command is something to keep in mind; it can come in handy in many situations.

2. top

While free displays only memory-related information, the top command does a little bit of everything. CPU utilization, process statistics, memory utilization — top does it all. In addition, unlike the free command, top's default behavior is to run continuously; there is no need to use the watch command. Here is a sample display:

11:13am  up 1 day, 31 min,  5 users,  load average: 0.00, 0.05, 0.07
89 processes: 85 sleeping, 3 running, 1 zombie, 0 stopped
CPU states:  0.5% user,  0.7% system,  0.0% nice, 98.6% idle
Mem:  255508K av, 241204K used,  14304K free,    0K shrd,   16604K buff
Swap: 530136K av,  56964K used, 473172K free                64724K cached

  PID USER   PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
 8532 ed      16   0  1156 1156   912 R     0.5  0.4   0:11 top
 1520 ed      15   0  4084 3524  2752 S     0.3  1.3   0:00 gnome-terminal
 1481 ed      15   0  3716 3280  2736 R     0.1  1.2   0:01 gnome-terminal
 1560 ed      15   0 11216  10M  4256 S     0.1  4.2   0:18 emacs
    1 root    15   0   472  432   416 S     0.0  0.1   0:04 init
    2 root    15   0     0    0     0 SW    0.0  0.0   0:00 keventd
    3 root    15   0     0    0     0 SW    0.0  0.0   0:00 kapmd
    4 root    34  19     0    0     0 SWN   0.0  0.0   0:00 ksoftirqd_CPU0
    5 root    15   0     0    0     0 SW    0.0  0.0   0:00 kswapd
    6 root    25   0     0    0     0 SW    0.0  0.0   0:00 bdflush
    7 root    15   0     0    0     0 SW    0.0  0.0   0:00 kupdated
    8 root    25   0     0    0     0 SW    0.0  0.0   0:00 mdrecoveryd
   12 root    15   0     0    0     0 SW    0.0  0.0   0:00 kjournald
   91 root    16   0     0    0     0 SW    0.0  0.0   0:00 khubd
  185 root    15   0     0    0     0 SW    0.0  0.0   0:00 kjournald
  186 root    15   0     0    0     0 SW    0.0  0.0   0:00 kjournald
  576 root    15   0   712  632   612 S     0.0  0.2   0:00 dhcpcd

The display is divided into two sections. The top section contains information related to overall system status — uptime, load average, process counts, CPU status, and utilization statistics for both memory and swap space. The lower section displays process-level statistics, the exact nature of which can be controlled while top is running.

Warning

Although top looks like a simple display-only program, this is not the case. top uses single character commands to perform various operations; if you are logged in as root, it is possible to change the priority and even kill any process on your system. Therefore, until you have reviewed top's help screen (type [?] to display it), it is safest to only type [q] (which exits top).

2.1. The GNOME System Monitor — A Graphical top

If you are more comfortable with graphical user interfaces, the GNOME System Monitor may be more to your liking. Like top, the GNOME System Monitor displays information related to overall system status, process counts, memory and swap utilization, and process-level statistics.

However, the GNOME System Monitor goes a step further by also including graphical representations of CPU, memory, and swap utilization, along with a tabular disk space utilization listing. Here is an example of the GNOME System Monitor's Process Listing display:

Figure 2-1. The GNOME System Monitor Process Listing Display

Additional information can be displayed for a specific process by first clicking on the desired process and then clicking on the More Info button.

To view the CPU, memory, and disk usage statistics, click on the System Monitor tab.

3. vmstat

For a more concise view of system performance, try vmstat. Using this resource monitor, it is possible to get an overview of process, memory, swap, I/O, system, and CPU activity in one line of numbers:

   procs                      memory    swap          io     system         cpu
 r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
 1  0  0      0 524684 155252 338068   0   0     1     6  111   114  10   3  87
        

The process-related fields are:

  • r — The number of runnable processes waiting for access to the CPU

  • b — The number of processes in an uninterruptible sleep state

  • w — The number of processes swapped out, but runnable

The memory-related fields are:

  • swpd — The amount of virtual memory used

  • free — The amount of free memory

  • buff — The amount of memory used for buffers

  • cache — The amount of memory used as page cache

The swap-related fields are:

  • si — The amount of memory swapped in from disk

  • so — The amount of memory swapped out to disk

The I/O-related fields are:

  • bi — Blocks received from a block device

  • bo — Blocks sent to a block device

The system-related fields are:

  • in — The number of interrupts per second

  • cs — The number of context switches per second

The CPU-related fields are:

  • us — The percentage of the time the CPU ran user-level code

  • sy — The percentage of the time the CPU ran system-level code

  • id — The percentage of the time the CPU was idle

When vmstat is run without any options, only one line is displayed. This line contains averages, calculated from the time the system was last booted.

However, most system administrators do not rely on the data in this line, as the time over which it was collected varies. Instead, most administrators take advantage of vmstat's ability to repetitively display resource utilization data at set intervals. For example, the command vmstat 1 displays one new line of utilization data every second, while the command vmstat 1 10 displays one new line per second, but only for the next ten seconds.

In the hands of an experienced administrator, vmstat can be used to quickly determine resource utilization and performance issues. But to gain more insight into those issues, a different kind of tool is required — a tool capable of more in-depth data collection and analysis.

4. The Sysstat Suite of Resource Monitoring Tools

While the previous tools may be helpful for gaining more insight into system performance over very short time frames, they are of little use beyond providing a snapshot of system resource utilization. In addition, there are aspects of system performance that cannot be easily monitored using such simplistic tools.

Therefore, a more sophisticated tool is necessary. Sysstat is such a tool.

Sysstat contains the following tools related to collecting I/O and CPU statistics:

iostat

Displays an overview of CPU utilization, along with I/O statistics for one or more disk drives.

mpstat

Displays more in-depth CPU statistics.

Sysstat also contains tools that collect system resource utilization data and create daily reports based on that data. These tools are:

sadc

Known as the system activity data collector, sadc collects system resource utilization information and writes it to a file.

sar

Producing reports from the files created by sadc, sar reports can be generated interactively or written to a file for more intensive analysis.

The following sections explore each of these tools in more detail.

4.1. The iostat command

The iostat command at its most basic provides an overview of CPU and disk I/O statistics:

Linux 2.4.18-18.8.0 (pigdog.example.com)     12/11/2002

avg-cpu:  %user   %nice    %sys   %idle
           6.11    2.56    2.15   89.18

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
dev3-0            1.68        15.69        22.42   31175836   44543290
          

Below the first line (which displays the system's kernel version and hostname, along with the current date), iostat displays an overview of the system's average CPU utilization since the last reboot. The CPU utilization report includes the following percentages:

  • Percentage of time spent in user mode (running applications, etc.)

  • Percentage of time spent in user mode (for processes that have altered their scheduling priority using nice(2))

  • Percentage of time spent in kernel mode

  • Percentage of time spent idle

Below the CPU utilization report is the device utilization report. This report contains one line for each active disk device on the system and includes the following information:

  • The device specification, displayed as dev<major-number>-<sequence-number>, where <major-number> is the device's major number[1], and <sequence-number> is a sequence number starting at zero.

  • The number of transfers (or I/O operations) per second.

  • The number of 512-byte blocks read per second.

  • The number of 512-byte blocks written per second.

  • The total number of 512-byte blocks read.

  • The total number of 512-byte blocks written.

This is just a sample of the information that can be obtained using iostat. For more information, see the iostat(1) man page.

4.2. The mpstat command

The mpstat command at first appears no different from the CPU utilization report produced by iostat:

Linux 2.4.18-14smp (pigdog.example.com)      12/11/2002

07:09:26 PM  CPU   %user   %nice %system   %idle    intr/s
07:09:26 PM  all    6.40    5.84    3.29   84.47    542.47
          

With the exception of an additional column showing the interrupts per second being handled by the CPU, there is no real difference. However, the situation changes if mpstat's -P ALL option is used:

Linux 2.4.18-14smp (pigdog.example.com)      12/11/2002

07:13:03 PM  CPU   %user   %nice %system   %idle    intr/s
07:13:03 PM  all    6.40    5.84    3.29   84.47    542.47
07:13:03 PM    0    6.36    5.80    3.29   84.54    542.47
07:13:03 PM    1    6.43    5.87    3.29   84.40    542.47
          

On multiprocessor systems, mpstat allows the utilization for each CPU to be viewed individually, making it possible to determine how effectively each CPU is being used.

4.3. The sadc command

As stated earlier, the sadc command collects system utilization data and writes it to a file for later analysis. By default, the data is written to files in the /var/log/sa/ directory. The files are named sa<dd>, where <dd> is the current day's two-digit date.

sadc is normally run by the sa1 script. This script is periodically invoked by cron via the file sysstat, which is located in /etc/cron.d. The sa1 script invokes sadc for a single one-second measuring interval. By default, cron runs sa1 every 10 minutes, adding the data collected during each interval to the current /var/log/sa/sa<dd> file.

4.4. The sar command

The sar command produces system utilization reports based on the data collected by sadc. As configured in Red Hat Linux, sar is automatically run to process the files automatically collected by sadc. The report files are written to /var/log/sa/ and are named sar<dd>, where <dd> is the previous day's two-digit date.

sar is normally run by the sa2 script. This script is periodically invoked by cron via the file sysstat, which is located in /etc/cron.d. By default, cron runs sa2 once a day at 23:53, allowing it to produce a report for the entire day's data.
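As a concrete illustration, the cron entries driving sa1 and sa2 typically look like the following sketch (the file is /etc/cron.d/sysstat; exact paths vary by distribution and sysstat version):

# run the system activity data collector every 10 minutes
*/10 * * * * root /usr/lib/sa/sa1 1 1
# generate the daily report at 23:53
53 23 * * * root /usr/lib/sa/sa2 -A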

4.4.1. Reading sar Reports

The format of a sar report produced by the default Red Hat Linux configuration consists of multiple sections, with each section containing a specific type of data, ordered by the time of day that the data was collected. Since sadc is configured to perform a one-second measurement interval every ten minutes, the default sar reports contain data in ten-minute increments, from 00:00 to 23:50[2].

Each section of the report starts with a heading that illustrates the data contained in the section. The heading is repeated at regular intervals throughout the section, making it easier to interpret the data while paging through the report. Each section ends with a line containing the average of the data reported in that section.

Here is a sample section sar report, with the data from 00:30 through 23:40 removed to save space:

00:00:01          CPU     %user     %nice   %system     %idle
00:10:00          all      6.39      1.96      0.66     90.98
00:20:01          all      1.61      3.16      1.09     94.14
…
23:50:01          all     44.07      0.02      0.77     55.14
Average:          all      5.80      4.99      2.87     86.34
            

In this section, CPU utilization information is displayed. This is very similar to the data displayed by iostat.

Other sections may have more than one line's worth of data per time, as shown by this section generated from CPU utilization data collected on a dual-processor system:

00:00:01          CPU     %user     %nice   %system     %idle
00:10:00            0      4.19      1.75      0.70     93.37
00:10:00            1      8.59      2.18      0.63     88.60
00:20:01            0      1.87      3.21      1.14     93.78
00:20:01            1      1.35      3.12      1.04     94.49
…
23:50:01            0     42.84      0.03      0.80     56.33
23:50:01            1     45.29      0.01      0.74     53.95
Average:            0      6.00      5.01      2.74     86.25
Average:            1      5.61      4.97      2.99     86.43
            

There are a total of seventeen different sections present in reports generated by the default Red Hat Linux sar configuration; many are discussed in upcoming chapters. For more information about the data contained in each section, see the sar(1) man page.

Notes

[1]

Device major numbers can be found by using ls -l to display the desired device file in /dev/. Here is sample output from ls -l /dev/hda:

brw-rw----    1 root     disk       3,   0 Aug 30 19:31 /dev/hda
                  

The major number in this example is 3, and appears between the file's group and its minor number.

[2]

Due to changing system loads, the actual time that the data was collected may vary by a second or two.
