3.7. SYSTEM PERFORMANCE COUNTERS

The performance and scalability of a software system are determined by a variety of factors. The factors that affect performance and scalability the most are classified as bottlenecks, and system performance counters help capture those bottlenecks.

All operating systems, whether Windows, UNIX, or Linux, have built-in system performance counters that can be used to monitor how a system is utilizing its resources. From a system's resource utilizations, one can infer immediately what the system is doing and where the problem areas are. Capturing system resource utilizations is one of the most fundamental tasks in diagnosing software performance and scalability problems.

A performance counter enabled through a system monitoring tool is simply a logical entity that represents one of the aspects of a resource quantitatively. For example, one often needs to know:

  • How busy the CPUs of a system are

  • How much memory is being used by the application under test

  • How busy the disks of a data storage system are

  • How busy the networks are

System resource utilizations can be monitored in real time or collected into log files for later analysis. In this section, I describe how this can be done on Windows and UNIX platforms.

3.7.1. Windows Performance Console

On Windows-based computers, the performance monitoring utility perfmon can be used to log performance counters. Since many developers and QA engineers may not have had a chance to become familiar with perfmon, we spend a few minutes here showing how to use it.

Figure 3.29. Dialog box for starting up perfmon.

Figure 3.30. Windows performance console.

To start up perfmon, click on Start | All Programs | Run, and enter perfmon as shown in Figure 3.29.

Then click OK and you should see the Performance Console as shown in Figure 3.30.

The left-hand side of the Console shows two items, System Monitor and Performance Logs and Alerts. When the System Monitor is selected, the right-hand side frame displays the current readings of the added counters. At the bottom of the frame, added counters are shown. For example, Figure 3.30 shows that on the computer WHENRY-NB, the counter %Processor_Time of the Performance Object Processor was added to display CPU utilizations. The readings associated with this counter are: Last CPU utilization reading 53.125%, Average CPU utilization 41.359%, Minimum CPU utilization 3.906%, and Maximum CPU utilization 57.813%. This is how to tell how busy the CPUs of a system are.

It might be helpful at this point to get familiar with the above performance console. Placing the mouse pointer on an icon at the top of the right-hand side frame shows what that icon is for. Some of the examples include:

  • Clicking on the second icon would clear the display.

  • Clicking on the third icon would enable viewing current activities.

  • Clicking on the "+" icon would bring up the Add Counters dialog box for adding new counters to the monitoring list.

  • Clicking on the "x" icon would remove a counter from the current monitoring list.

  • Clicking on the light bulb icon would highlight the display for the counter selected currently.

  • Clicking/unclicking on the red-cross icon would freeze/unfreeze displaying the current activities.

Next, let's see how to add various performance counters. Figure 3.31 shows what Performance object to select, what Counters to select, and whether to select for All instances or only some specific instances.

After selecting Performance object, instances, and counters based on your needs, click Add to add the desired counters. Click Close to exit the Add Counters dialog box. If you want to know what a specific counter is for, select the counter you are interested in, then click Explain and you will get a fairly detailed description about that counter.

You can adjust the sampling interval by clicking on the Properties icon and then specifying Sample automatically every n seconds, where n is the number of seconds you desire as the sampling interval. The default of 1 second shown in Figure 3.32 is usually too frequent; increase it based on how long your test will last.

Real-time display is meant for short test durations only, and the data is lost once the console is closed. For longer tests, you can log the counters into a perfmon log file and analyze the log afterwards.

Figure 3.31. Dialog box for adding perfmon counters.

Figure 3.32. Dialog box for entering perfmon sample interval.

To set up a perfmon logging task, follow this procedure (a command-line alternative using the logman utility is sketched after these steps):

  • Select Counter Logs under Performance Logs and Alerts, and right-click on Counter Logs to select the New Log Settings dialog box as shown in Figure 3.33.

  • Enter a name and click on OK, which would bring up the dialog box as shown in Figure 3.34.

  • From here you can add any counters you are interested in and specify a sampling interval. At the top, it shows the log file name, which will contain the performance log data for later offline analysis.

  • You can specify the log format under the Log Files tab: either Binary File or Text File (Comma delimited), the latter being convenient for plotting charts in Excel. Even if you select the binary format now, you can re-save the logged data in text format later: import the binary log, specify the time range, add the counters you are interested in, and display the data; then right-click anywhere on the display area and re-save the data in text format.

  • You specify the schedule under the Schedule tab. You can choose to start and stop logging manually, or specify a logging duration so that you don't keep collecting unnecessary data after a test is complete.
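
If you prefer to script log collection rather than clicking through the GUI, newer Windows versions also ship the logman command-line utility, which can create an equivalent counter log. The sketch below is illustrative only: the log name, counter paths, sampling interval, and output path are placeholders to adapt to your own test.

C:\> logman create counter MyTestLog -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" -si 30 -o "C:\PerfLogs\MyTestLog"
C:\> logman start MyTestLog
C:\> logman stop MyTestLog

The resulting binary log can be opened and analyzed in the Performance Console exactly like a log created through the Counter Logs GUI.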

To analyze the perfmon log data, follow this procedure:

  • Select the System Monitor entry, and then click on the fourth icon of View Log Data, which should bring up the dialog box as shown in Figure 3.35.

  • Click on Add and then add the perfmon log file you want to analyze, which should bring up a dialog box similar to Figure 3.36.

  • Click on the Time Range button to display the time range for which the counters were logged. You can move the sliding bars to adjust the exact range you want. Keep in mind that the average value of a counter is based on the exact range you select, so you may want to adjust to the exact start and stop times of your test. You should keep a daily activity log that records the exact details of your test such as test start/stop time, all test conditions, and test results so that you can easily look back at exactly what you did with your previous test. This is a good habit to have as a software performance engineer.

  • Then click on the Data tab to get to the Add Counters dialog box. From there, first delete all counters and then select the counters you are interested in for analyzing your perfmon log data.

    Figure 3.33. Dialog box for naming a new perfmon log setting.
    Figure 3.34. Dialog box for configuring a new perfmon log setting.
    Figure 3.35. Dialog box for selecting the perfmon log file to be analyzed.
    Figure 3.36. Dialog box with a perfmon log file added.

This may seem a little tedious, but it helps you learn perfmon quickly without having to experiment with it yourself. Initially, it might be difficult to decide which counters to select out of the hundreds of built-in ones. To help you get started, Table 3.3 shows the common perfmon counters I typically use for diagnosing performance issues. You can add more based on your specific needs, but this list should be sufficient in general.

Before moving on to UNIX system performance counters, I'd like to share some techniques for using perfmon to diagnose common performance and scalability issues such as memory leaks, CPU bottlenecks, and disk I/O bottlenecks. Using perfmon this way is an important skill to acquire for testing the performance and scalability of a software system on the Windows platform. perfmon is intuitive, easy to learn, and very powerful, not only for troubleshooting the performance and scalability problems of a complex, large-scale software system, but also for figuring out what's wrong when your own Windows desktop or laptop becomes too slow to bear.

Let's start with using perfmon to diagnose memory leaks.

Table 3.3. A Minimum Set of perfmon Counters to Be Logged for Performance Tests in a Windows Environment[*]

Performance Object             Performance Counters
Processor                      %Processor Time
System                         Processor Queue Length
Process                        %Processor Time
                               Private Bytes
                               Thread Count
                               Virtual Bytes
                               Working Set
Memory                         Available MBytes
                               Page Reads/sec
                               Page Writes/sec
PhysicalDisk or LogicalDisk    %Idle Time (Note: use 100 - %Idle Time for %Busy Time)
                               Avg. Disk Read Queue Length
                               Avg. Disk Write Queue Length
                               Avg. Disk Bytes/Read
                               Avg. Disk Bytes/Write
                               Avg. Disk sec/Read
                               Avg. Disk sec/Write
                               Disk Read Bytes/sec
                               Disk Write Bytes/sec
                               Disk Bytes/sec
                               Disk Reads/sec
                               Disk Writes/sec
Network Interface              Bytes Received/sec
                               Bytes Sent/sec
                               Bytes Total/sec

[*] Select instances that are pertinent to your tests.
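
The counters in Table 3.3 can also be collected from the command line with the typeperf utility, which reads counter paths from a plain text file. The sketch below covers a representative subset; the file names, the _Total and wildcard instances, and the 30-second sampling interval are illustrative assumptions to adjust to your test.

counters.txt (one counter path per line):

\Processor(_Total)\% Processor Time
\System\Processor Queue Length
\Memory\Available MBytes
\PhysicalDisk(_Total)\% Idle Time
\PhysicalDisk(_Total)\Disk Reads/sec
\PhysicalDisk(_Total)\Disk Writes/sec
\Network Interface(*)\Bytes Total/sec

C:\> typeperf -cf counters.txt -si 30 -f CSV -o perflog.csv

typeperf keeps sampling until you press CTRL + C, and the resulting CSV file can be imported directly into Excel for charting.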

3.7.2. Using perfmon to Diagnose Memory Leaks

The first chart I'd like to show is the memory growth chart, which can help you evaluate the memory leak issues associated with your application. Memory leaks are a very common factor affecting the performance and scalability of a software system on Windows, especially 32-bit Windows operating systems. They are among the toughest issues in software development: most of the time you know your software leaks memory, but it's hard to know where the leaks come from. perfmon can only help diagnose whether you have memory leaks in your software; it doesn't tell you where they come from. You have to use other tools, such as Purify®, to find and fix the memory leaks your product suffers from.

In a 32-bit environment, the maximum addressable memory space is 4 GB. On the Windows platform, this 4 GB is split evenly between the kernel and a user process, so each process is limited to 2 GB by default. Although you can extend that 2-GB limit to 3 GB using the 3-GB switch (the /3GB boot option), 3 GB may still not be enough for applications with severe memory leak problems. So the best defense is to contain memory growth in your application; otherwise, when the 2-GB limit is hit, your application will start to malfunction and become totally unusable.

Figure 3.37. Memory growth associated with two processes of an application written in C/C++ in a Windows environment.

As a performance engineer, you are obligated to check memory growth in your software product using a large volume of data. When you observe significant memory growth, you need to communicate it back to your development team so that they can fix it in time. Also check whether memory usage comes back down after your test is complete. If it doesn't, it can probably be classified as a memory leak, which sounds more alarming than memory growth. It is also possible that the memory growth you observe is actually memory fragmentation, which is related to how the operating system manages memory. Whether it is a memory leak or memory fragmentation, the consequences for the application are equally bad.

Figure 3.37 shows the memory growth of two processes of an application written in C/C++. The total test duration was about 24 hours. Note that the private bytes curves are smoother than the virtual bytes curves, which appear stair-stepped. One should use private bytes to evaluate actual memory consumption. It is seen that Process A is much more benign than Process B in terms of memory growth, as its private bytes curve is much flatter. Process B reached 320 MB by the end of the test, which means it might reach the 2-GB memory limit if the test lasted 5 days. From this test, it's clear that some action needs to be taken against the memory growth of Process B.
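
To collect this kind of memory growth data for your own application, you can log just the memory counters of the processes under suspicion. Below is a minimal sketch using typeperf; the instance name MyApp is a placeholder for your own process (the instance name is the executable name without the .exe extension), and the 60-second interval is an assumption:

C:\> typeperf "\Process(MyApp)\Private Bytes" "\Process(MyApp)\Virtual Bytes" -si 60 -f CSV -o myapp_memory.csv

Plotting the two columns of the resulting CSV file over a long-running test produces private bytes and virtual bytes curves like those in Figure 3.37.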

In the next section, I'll discuss how to use perfmon to diagnose CPU bottlenecks.

3.7.3. Using perfmon to Diagnose CPU Bottlenecks

You can monitor the CPU utilization of a Windows system using the Processor performance object with the %Processor Time counter if you know you have only one major process, such as a database server, running on the system. If you have multiple processes running, use the Process performance object with the %Processor Time counter for the process instances you are concerned with. The %Processor Time counter of the Processor object measures the average CPU utilization across all CPUs, whereas the %Processor Time counter of the Process object measures the cumulative CPU utilization of a process across all CPUs. So the maximum value is 100% for the former and N × 100% for the latter, where N is the number of CPUs of an N-way Windows system. This is a subtle difference that must be accounted for when interpreting CPU utilizations.
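
For example, on a four-way system, a Process %Processor Time reading of 180% means that the process is consuming 180/4 = 45% of the total CPU capacity of the box, whereas the Processor %Processor Time counter for the same load would read 45%, since it is already averaged across all four CPUs.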

Typically, an application is deployed on multiple systems, for example, the application server on one physical box and the database server on another. When the application is properly sized, well optimized, and tuned, CPU utilizations across the systems should be well balanced to yield sustainable, maximum possible performance and scalability. Figure 3.38 shows such a balanced flow, where the application server and database server were about equally utilized, yielding a high throughput of 127 objects created per second. Over one million objects were created during a period of 2 hours and 11 minutes in the associated test run.

If you see the CPU utilization of the database server going up while the CPU utilization of the application server is going down, then some tuning is required to bring both of them to a steady state. This phenomenon has been called "bifurcating," and it is quite common for applications that are not well tuned [Liu, 2006]. This is a good example of why you should not just keep generating performance test numbers; you should also examine the utilizations of various resources to see whether there are opportunities for improving the performance and scalability of your application.

Figure 3.38. CPU utilizations of two identical Intel Xeon systems on Windows 2003, one as the application server and the other as the database server.

The general criterion for declaring the CPU the bottleneck on a computer system is that average CPU utilization is above 70% or the average processor queue length per CPU is above two. However, other resources, such as disks, may become the bottleneck before the CPU does. This is especially true for database-intensive software applications. Let's look at such a scenario next.

3.7.4. Using perfmon to Diagnose Disk I/O Bottlenecks

In this section, I'd like to share with you a chart that shows disk activities. It is very important to make sure that your disk I/O is not the bottleneck if your application is database intensive.

perfmon provides a sufficient number of counters associated with disk activities. However, you may often find that the %Disk Time counter gives bogus numbers exceeding 100%. As a workaround, use 100 - %Idle Time to calculate the disk %Busy Time, which is the disk counterpart of average CPU utilization. Figure 3.39 shows the average disk utilizations calculated as 100 - %Idle Time for the one million object creation batch job discussed in the preceding section. The database storage used for this test was an internal RAID 0 configuration striped across three physical disks.
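
If you have re-saved the disk counters in text format, you can compute the average %Busy Time directly from the log with awk (on a UNIX machine, or on Windows with a UNIX-style toolkit installed). The one-liner below is a sketch that assumes the %Idle Time samples are in the second comma-separated column of a file named disk_idle.csv, with a header row on the first line; adjust the column number and file name to your own log:

awk -F',' 'NR > 1 { gsub(/"/, "", $2); sum += 100 - $2; n++ } END { print "average %Busy =", sum/n }' disk_idle.csv

For instance, if %Idle Time averages 40%, the disk is 100 - 40 = 60% busy.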

Unlike with CPUs, a disk utilization level above 20% starts to indicate that I/O is the bottleneck, whereas for CPUs the threshold is about 70%. This disparity exists because CPUs in general can crank much faster than disks can spin.

Figure 3.39. Average disk utilizations.

Figure 3.40. Average disk read queue length and write queue length.

Exploring disk activities is a lot more interesting than exploring CPU activities, as we can dig deeper into more metrics such as average (read | write) queue length, average (reads | writes) / sec, average disk sec / (read | write), and disk (read | write) / sec. Let's explore each of these disk activity metrics.

Figure 3.40 shows the average disk read queue length and average disk write queue length recorded during that one million object creation batch job. It is seen that the write queue length is much larger than the read queue length, which implies that a lot more disk write activities occurred than read activities. This is not surprising at all, as during this batch job test, one million objects were created and persisted to the disks, which inevitably incurred a lot more disk writes than reads.

Queue length is a measure of the number of items waiting in a queue to be processed. According to queuing theory, which will be introduced in a later chapter of this book, a resource is considered a bottleneck if its queue length is larger than 2. As introduced earlier, the database storage used for this test was an internal RAID 0 configuration striped across three physical disks, which pushes the queue length threshold to 6. It's clear from Figure 3.40 that the write queue length was around 20, far exceeding the threshold value of 6. This implies that a more capable storage system would help improve the performance and scalability of this batch job further.

Figure 3.41 shows the average number of reads and writes per second that occurred during this test. There were about 300 writes/second and 50 reads/second, which once more confirms that more writes than reads occurred during the test period for the one million object creation batch job. Remember that the throughput for this batch job was 127 objects/second, which implies that about 2 to 3 writes occurred per object creation on average. This seems to be normal for most database write-intensive applications.

Figure 3.41. Average number of reads and writes per second that occurred during the one million object creation batch job.

In addition to knowing the disk queue lengths and I/O rates associated with a test, it's also insightful to know how long it takes on average per read and per write. Normally, disk times should range from 5 milliseconds to 20 milliseconds with normal I/O loads. You may get submillisecond disk times if the database storage has a huge cache, for example, from a few gigabytes to tens of gigabytes.

For this test, each disk has only a 256-MB cache, so we would expect disk read and write times to be well above 1 millisecond. The actual disk read and write times associated with this test are shown in Figure 3.42. As is seen, the average disk write time is much longer than the average disk read time, which is expected since we already know from the previous analysis that many more requests accumulated in the write queue than in the read queue. It is reassuring when all metrics are consistent with one another.

Charts are very useful for qualitatively characterizing each performance factor. However, they are less precise for quantifying each performance factor. To get more quantitative, you can use the View Report functionality of perfmon to obtain the average value of each performance counter, such as shown in Figure 3.43 with the following quantitative values for some of the indicative disk performance counters:

  • Average disk utilization: 60%

  • Average disk time per read: 11 milliseconds

  • Average disk time per write: 73 milliseconds

  • Average disk write queue length: 22

    Figure 3.42. Average disk read time and average disk write time with the one million object creation batch job. Note that avg. disk sec / (read | write) are the perfmon counter names and the actual units are milliseconds.
  • Disk reads/sec: 46

  • Disk writes/sec: 297

Keep in mind that you need to narrow the time range of your perfmon log data down to the exact range corresponding to the start and end times of your test; otherwise, the averaged values won't be accurate.
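
One way to do this narrowing from the command line is the relog utility, which can extract a time window from a binary perfmon log and convert it to CSV at the same time. The file names and timestamps below are placeholders; substitute the actual start and stop times recorded in your daily activity log:

C:\> relog perfmon_log.blg -b "08/25/2011 09:00:00" -e "08/25/2011 11:11:00" -f CSV -o test_window.csv

Averages computed from the extracted file then correspond to the test window rather than to the entire logging period.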

Performance Console allows you to monitor system resource utilizations over an extended period of time while a test is running, and its logs are convenient for post-test performance and scalability analysis. Sometimes, however, you may want to use another Windows utility, Task Manager, to check resource consumption on the current system right away. This is the topic of the next section.

Figure 3.43. Perfmon report.

3.7.5. Using Task Manager to Diagnose System Bottlenecks

We'll see in this section that Task Manager is more convenient than perfmon for some tasks. For example:

  • You may want to have a quick look at how busy the CPUs of a system are overall right now.

  • You may want to check how well balanced the CPU utilizations are across multiple CPUs of the system. This actually is an easy way to tell whether the software is running in multithreaded mode by examining whether all CPUs are about equally busy simultaneously.

  • You may want to check which processes are consuming most of the CPU power on this system right now.

  • You may want to check how much memory is used right now. And you can drill down to which processes are consuming most of the memory.

  • You may want to check the network utilization right now.

  • You can even see in real time if memory is leaking or not. If you see the memory consumption of a process is only going up, then there are probably memory leaks with that process.

First, to start up Task Manager, press CTRL + ALT + DELETE and you should get a dialog box similar to Figure 3.44.

As shown in Figure 3.44 under the Performance tab, this system has two CPUs and both of them were busy, which indicates that the application running on it is multithreaded. The tab also shows that a total of 374 MB of memory was in use at that moment.

You can check the network utilizations by clicking on the Network tab, and check the users currently logged in by clicking the Users tab. But the most important tab for troubleshooting a performance issue with a system is the Process tab.

Computer programs run on a computer system as processes. Each process has an ID and a name to identify itself on the system on which it is running. By clicking on the Process tab of the Windows Task Manager dialog box, you can bring up a list of the processes currently running on the system, as shown in Figure 3.45.

A few notes about this Process tab:

  • You may want to check the Show processes from all users box at the bottom left corner of the window shown in Figure 3.45 in order to see the processes you are looking for.

  • You can't see your processes unless they are running right now.

  • You can sort by CPU Usage or Memory Usage to look for the most CPU-intensive or memory-intensive processes running on this system right now.

  • You can decide which metrics you want displayed by clicking View | Select Columns ..., which brings up the list of metrics you can select, as shown in Figure 3.46.

As you can see from the screenshot in Figure 3.46, you can select Memory Usage, Memory Usage Delta, and Peak Memory Usage from the available view options. These counters give a complete view of a process's memory consumption. When a memory-intensive application is running, you will see the memory usage of that process keep growing, with more positive memory usage deltas than negative ones. If the memory usage doesn't come down after the process has completed its task and is waiting for new work, that's an indication of a memory leak in that process.
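
If you prefer the command line, a similar per-process memory check can be approximated with the tasklist utility. The command below is a sketch; myapp.exe is a placeholder for the image name of the process you care about:

C:\> tasklist /FI "IMAGENAME eq myapp.exe" /FO CSV

The Mem Usage column in the output corresponds to the Memory Usage column in Task Manager, so running the command periodically gives a crude record of whether a process's memory keeps growing.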

Figure 3.44. Windows Task Manager.

This concludes our discussion on performance counters on Windows systems. Most software development work is done on Windows, which is why we covered more topics on Windows.

However, for enterprise software applications, UNIX or Linux platforms are the most likely choice for some customers, so you might need to test out your software on these platforms as well. Instead of repeating what is already available in many UNIX/Linux texts, in the next section, I'll show you a simple script that can be used to capture the CPU and memory consumptions for the processes that you are concerned with. This probably is sufficient for most of your performance test needs. In a production environment, UNIX/Linux systems are typically managed by professional administrators who have developed special ways of capturing various system performance counters or simply use tools provided by vendors. That is beyond the scope of this book.

Figure 3.45. Process view from Windows Task Manager.

Figure 3.46. Select columns for process view.

3.7.6. UNIX Platforms

On UNIX and Linux systems, vendors provide their own system performance monitoring tools, although common utilities such as sar are available on most UNIX flavors.

Performance troubleshooting often requires monitoring resource utilizations on a per-process basis. This might be a little more challenging on UNIX systems than on Windows. On Windows, you use perfmon to configure which processes and which counters you want to monitor; on UNIX systems, you need a script to do the same job. Here, I'd like to share a script that I often use on a popular UNIX flavor for capturing CPU and memory utilization while my tests are running. Since it's written for the bash shell, it can be run on other UNIX and Linux systems as well.

Here is the script, which you can adapt to your needs for monitoring system resource usage on a per-process basis in your UNIX or Linux environment:

#!/bin/bash
# Sample the CPU and memory usage of one process at a fixed interval.
sleepTime=60                  # sampling interval in seconds
pattern="yourPattern"         # a string that uniquely identifies your process
x=0
# extract the process ID of the process matching the pattern
procId=$(ps -eo pid,pcpu,time,args | grep $pattern |\
    grep -v grep | awk '{print $1}')
echo "procId=" $procId
while [ $x -ge 0 ]            # effectively an infinite loop
do
 date=$(date)
 ps0=$(ps -o vsz,rss,pcpu -p $procId)
 x=$((x+1))
 # print sample number, date, vsz and rss (converted from KB to MB), and pcpu as CSV
 echo $x $date $ps0 | awk 'BEGIN {OFS=","} {print $1, $3, $4, $5, $8,\
    $11/1000.0, $9, $12/1000.0, $10, $13}'
 sleep $sleepTime
done

As you see, this is a bash script. You specify how often you want to sample (sleepTime) and a string (pattern) that identifies your process, and the script extracts the process ID of that process. Using that process ID, it keeps polling in an infinite loop for the counters you want to record. In this script, I was most interested in three counters, vsz, rss, and pcpu, which represent the virtual memory size, resident memory size, and CPU usage of that process. The counters vsz and rss are roughly equivalent to the virtual bytes and private bytes counters of perfmon on Windows, respectively. These counters are very useful for monitoring memory growth associated with a process.

To execute this script, change to the bash shell environment and issue the following command:

prompt> ./scriptFileName > filename.txt &

The output is redirected to a text file that you can analyze later. The file is comma-separated, so you can import it into an Excel spreadsheet to draw the charts you are interested in. Remember that this is an infinite loop, so you need to bring it to the foreground with the fg command and stop it when you are done.

If you cannot run this script, you might need to execute the command chmod 700 scriptFileName to set proper permissions.

This simple script can also be modified to monitor multiple processes at once, as sketched below.
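
For example, here is a minimal sketch of such a multi-process variant. The process name patterns are placeholders, and the output format (one comma-separated line per process per sampling interval) is an assumption you can change to suit your charting needs:

#!/bin/bash
# Sample CPU and memory usage for several processes at a fixed interval.
sleepTime=60
patterns="serverProcA serverProcB serverProcC"   # placeholder process name patterns

while true
do
  timestamp=$(date '+%Y-%m-%d %H:%M:%S')
  for pattern in $patterns
  do
    # take the first process ID whose command line matches the pattern
    procId=$(ps -eo pid,args | grep "$pattern" | grep -v grep | awk '{print $1}' | head -1)
    [ -z "$procId" ] && continue    # skip patterns with no running process
    # vsz and rss are reported in KB; convert to MB as in the single-process script
    ps -o vsz=,rss=,pcpu= -p "$procId" | \
      awk -v ts="$timestamp" -v name="$pattern" \
          '{print ts "," name "," $1/1000.0 "," $2/1000.0 "," $3}'
  done
  sleep $sleepTime
done

As with the single-process script, redirect the output to a text file and stop the loop when your test is done.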

In the next section, I'll propose several software performance data principles to help enforce the notion that software performance and scalability testing is not just about gathering data. It's about getting data that has value for your company and your customers.




The U.S. Financial System Is Effectively Insolvent

For those who argue that the rate of growth of economic activity is turning positive--that economies are contracting but at a slower rate than in the fourth quarter of 2008--the latest data don't confirm this relative optimism. In 2008's fourth quarter, gross domestic product fell by about 6% in the U.S., 6% in the euro zone, 8% in Germany, 12% in Japan, 16% in Singapore and 20% in South Korea. So things are even more awful in Europe and Asia than in the U.S.

There is, in fact, a rising risk of a global L-shaped depression that would be even worse than the current, painful U-shaped global recession. Here's why:

First, note that most indicators suggest that the second derivative of economic activity is still sharply negative in Europe and Japan and close to negative in the U.S. and China. Some signals that the second derivative was turning positive for the U.S. and China turned out to be fake starts. For the U.S., the Empire State and Philly Fed indexes of manufacturing are still in free fall; initial claims for unemployment benefits are up to scary levels, suggesting accelerating job losses; and January's sales increase is a fluke--more of a rebound from a very depressed December, after aggressive post-holiday sales, than a sustainable recovery.

For China, the growth of credit is only driven by firms borrowing cheap to invest in higher-returning deposits, not to invest, and steel prices in China have resumed their sharp fall. The more scary data are those for trade flows in Asia, with exports falling by about 40% to 50% in Japan, Taiwan and Korea.

Even correcting for the effect of the Chinese New Year, exports and imports are sharply down in China, with imports falling (-40%) more than exports. This is a scary signal, as Chinese imports are mostly raw materials and intermediate inputs. So while Chinese exports have fallen so far less than in the rest of Asia, they may fall much more sharply in the months ahead, as signaled by the free fall in imports.

With economic activity contracting in 2009's first quarter at the same rate as in 2008's fourth quarter, a nasty U-shaped recession could turn into a more severe L-shaped near-depression (or stag-deflation). The scale and speed of synchronized global economic contraction is really unprecedented (at least since the Great Depression), with a free fall of GDP, income, consumption, industrial production, employment, exports, imports, residential investment and, more ominously, capital expenditures around the world. And now many emerging-market economies are on the verge of a fully fledged financial crisis, starting with emerging Europe.

Fiscal and monetary stimulus is becoming more aggressive in the U.S. and China, and less so in the euro zone and Japan, where policymakers are frozen and behind the curve. But such stimulus is unlikely to lead to a sustained economic recovery. Monetary easing--even unorthodox--is like pushing on a string when (1) the problems of the economy are of insolvency/credit rather than just illiquidity; (2) there is a global glut of capacity (housing, autos and consumer durables and massive excess capacity, because of years of overinvestment by China, Asia and other emerging markets), while strapped firms and households don't react to lower interest rates, as it takes years to work out this glut; (3) deflation keeps real policy rates high and rising while nominal policy rates are close to zero; and (4) high yield spreads are still 2,000 basis points relative to safe Treasuries in spite of zero policy rates.

Fiscal policy in the U.S. and China also has its limits. Of the $800 billion of the U.S. fiscal stimulus, only $200 billion will be spent in 2009, with most of it being backloaded to 2010 and later. And of this $200 billion, half is tax cuts that will be mostly saved rather than spent, as households are worried about jobs and paying their credit card and mortgage bills. (Of last year's $100 billion tax cut, only 30% was spent and the rest saved.)

Thus, given the collapse of five out of six components of aggregate demand (consumption, residential investment, capital expenditure in the corporate sector, business inventories and exports), the stimulus from government spending will be puny this year.

Chinese fiscal stimulus will also provide much less bang for the headline buck ($480 billion). For one thing, you have an economy radically dependent on trade: a trade surplus of 12% of GDP, exports above 40% of GDP, and most investment (that is almost 50% of GDP) going to the production of more capacity/machinery to produce more exportable goods. The rest of investment is in residential construction (now falling sharply following the bursting of the Chinese housing bubble) and infrastructure investment (the only component of investment that is rising).

With massive excess capacity in the industrial/manufacturing sector and thousands of firms shutting down, why would private and state-owned firms invest more, even if interest rates are lower and credit is cheaper? Forcing state-owned banks and firms to, respectively, lend and spend/invest more will only increase the size of nonperforming loans and the amount of excess capacity. And with most economic activity and fiscal stimulus being capital- rather than labor-intensive, the drag on job creation will continue.

So without a recovery in the U.S. and global economy, there cannot be a sustainable recovery of Chinese growth. And with the U.S. recovery requiring lower consumption, higher private savings and lower trade deficits, a U.S. recovery requires China's and other surplus countries' (Japan, Germany, etc.) growth to depend more on domestic demand and less on net exports. But domestic-demand growth is anemic in surplus countries for cyclical and structural reasons. So a recovery of the global economy cannot occur without a rapid and orderly adjustment of global current account imbalances.

Meanwhile, the adjustment of U.S. consumption and savings is continuing. The January personal spending numbers were up for one month (a temporary fluke driven by transient factors), and personal savings were up to 5%. But that increase in savings is only illusory. There is a difference between the national income account (NIA) definition of household savings (disposable income minus consumption spending) and the economic definitions of savings as the change in wealth/net worth: savings as the change in wealth is equal to the NIA definition of savings plus capital gains/losses on the value of existing wealth (financial assets and real assets such as housing wealth).

In the years when stock markets and home values were going up, the apologists for the sharp rise in consumption and measured fall in savings were arguing that the measured savings were distorted downward by failing to account for the change in net worth due to the rise in home prices and the stock markets.

But now with stock prices down over 50% from peak and home prices down 25% from peak (and still to fall another 20%), the destruction of household net worth has become dramatic. Thus, correcting for the fall in net worth, personal savings is not 5%, as the official NIA definition suggests, but rather sharply negative.

In other terms, given the massive destruction of household wealth/net worth since 2006-07, the NIA measure of savings will have to increase much more sharply than has currently occurred to restore households' severely damaged balance sheets. Thus, the contraction of real consumption will have to continue for years to come before the adjustment is completed.

In the meanwhile the Dow Jones industrial average is down today below 7,000, and U.S. equity indexes are 20% down from the beginning of the year. I argued in early January that the 25% stock market rally from late November to the year's end was another bear market suckers' rally that would fizzle out completely once an onslaught of worse than expected macro and earnings news, and worse than expected financial shocks, occurs. And the same factors will put further downward pressures on U.S. and global equities for the rest of the year, as the recession will continue into 2010, if not longer (a rising risk of an L-shaped near-depression).

Of course, you cannot rule out another bear market suckers' rally in 2009, most likely in the second or third quarters. The drivers of this rally will be the improvement in second derivatives of economic growth and activity in the U.S. and China that the policy stimulus will provide on a temporary basis. But after the effects of a tax cut fizzle out in late summer, and after the shovel-ready infrastructure projects are done, the policy stimulus will slacken by the fourth quarter, as most infrastructure projects take years to be started, let alone finished.

Similarly in China, the fiscal stimulus will provide a fake boost to non-tradable productive activities while the traded sector and manufacturing continue to contract. But given the severity of macro, household, financial-firm and corporate imbalances in the U.S. and around the world, this second- or third-quarter suckers' market rally will fizzle out later in the year, like the previous five ones in the last 12 months.

In the meantime, the massacre in financial markets and among financial firms is continuing. The debate on "bank nationalization" is borderline surreal, with the U.S. government having already committed--between guarantees, investment, recapitalization and liquidity provision--about $9 trillion of government financial resources to the financial system (and having already spent $2 trillion of this staggering $9 trillion figure).

Thus, the U.S. financial system is de facto nationalized, as the Federal Reserve has become the lender of first and only resort rather than the lender of last resort, and the U.S. Treasury is the spender and guarantor of first and only resort. The only issue is whether banks and financial institutions should also be nationalized de jure.

But even in this case, the distinction is only between partial nationalization and full nationalization: With 36% (and soon to be larger) ownership of Citi (NYSE: C), the U.S. government is already the largest shareholder there. So what is the nonsense about not nationalizing banks? Citi is already effectively partially nationalized; the only issue is whether it should be fully nationalized.

Ditto for AIG (NYSE: AIG), which lost $62 billion in the fourth quarter and $99 billion in all of 2008 and is already 80% government-owned. With such staggering losses, it should be formally 100% government-owned. And now the Fed and Treasury commitments of public resources to the bailout of the shareholders and creditors of AIG have gone from $80 billion to $162 billion.

Given that common shareholders of AIG are already effectively wiped out (the stock has become a penny stock), the bailout of AIG is a bailout of the creditors of AIG that would now be insolvent without such a bailout. AIG sold over $500 billion of toxic credit default swap protection, and the counter-parties of this toxic insurance are major U.S. broker-dealers and banks.

News reports and bank analysts suggested that Goldman Sachs (NYSE: GS) got about $25 billion of the government bailout of AIG and that Merrill Lynch was the second largest beneficiary of the government largesse. These are educated guesses, as the government is hiding the counter-party beneficiaries of the AIG bailout. (Maybe Bloomberg should sue the Fed and Treasury again to have them disclose this information.)

But some things are known: Goldman's Lloyd Blankfein was the only CEO of a Wall Street firm who was present at the New York Fed meeting when the AIG bailout was discussed. So let us not kid each other: The $162 billion bailout of AIG is a nontransparent, opaque and shady bailout of the AIG counter-parties: Goldman Sachs, Merrill Lynch and other domestic and foreign financial institutions.

So for the Treasury to hide behind the "systemic risk" excuse to fork out another $30 billion to AIG is a polite way to say that without such a bailout (and another half-dozen government bailout programs such as TAF, TSLF, PDCF, TARP, TALF and a program that allowed $170 billion of additional debt borrowing by banks and other broker-dealers, with a full government guarantee), Goldman Sachs and every other broker-dealer and major U.S. bank would already be fully insolvent today.

And even with the $2 trillion of government support, most of these financial institutions are insolvent, as delinquency and charge-off rates are now rising at a rate--given the macro outlook--that means expected credit losses for U.S. financial firms will peak at $3.6 trillion. So, in simple words, the U.S. financial system is effectively insolvent.

Nouriel Roubini, a professor at the Stern Business School at New York University and chairman of Roubini Global Economics, is a weekly columnist for Forbes.com.






CIS benchmarks

Hacking 2009. 2. 6. 09:19


Which Operating System To Use?

IT 2009. 1. 31. 04:01

Is it safe to embrace Microsoft's Vista operating system, or should you wait for Windows 7? Let me try to sort this out for you.

Released two years late and with many highly touted features removed, Vista has long been dogged with reports of a toxic brew of compatibility problems, sluggish performance and a plethora of annoying tics and habits. And now the Windows 7 beta has arrived, and many businesses--including bMighty parent company TechWeb--are asking themselves whether they can hang on to Windows XP until Windows 7 arrives in a year or so.


Nevertheless, there I was in a generic Office/Best Depot/Max/Buy store buying a Vista-powered HP Pavilion laptop to use as my primary home computer. Why?


Why Tech Can't Cure Medical Inflation

Lee Gomes, 12.18.08, 06:00 PM EST
Forbes Magazine dated January 12, 2009

Computers in medicine aren't a cure. They might even make the system sicker.


Whenever President-elect Obama is asked how he'll pay for his ambitious health care reform plans, he invariably talks about the $80 billion in annual savings he'll get from bringing computerized recordkeeping to doctors' offices and hospitals.

If only that were true. While there are benefits that might be had from using computers more widely in medicine, doing so won't save us any money and, in fact, will likely make things more expensive. There's even a chance that the quality of care might get worse along the way.

That's probably counterintuitive to anyone contemplating the wall of file drawers in a typical doctor's office. Medicine clearly has yet to join the rest of the world in going digital; no wonder, the thought goes, that U.S. health care is so expensive.

But while paper records certainly have their inconveniences--filling out your thousandth questionnaire, say--they play a very minor role in galloping health care inflation.

Instead, the heart of the problem is the U.S. fee-for-service system, in which doctors get paid to do things to people. The more technical and invasive the procedure, the more money they make. Doctors have responded in the expected Pavlovian manner, collectively shifting away from basic primary care toward expensive specializations that run up costs without necessarily improving medical outcomes.

As any chief information officer can tell you, adding computers to this sort of inefficient process only makes the inefficiency happen more quickly.

Much of what doctors or policymakers know about technology comes from vendors, who are busy guilt-tripping the medical sector about being slow to get with it. But more quietly, health care economists have been studying the actual impact of these systems. Their findings should disturb those who look to information technology for an easy fix.



Life In A Recession

Business 2008. 12. 24. 03:14

Some people say recessions are inevitable; others say they are healthy, necessary to clean out the system and clear the way for the next expansion. Finally, while many blame greedy capitalists for pushing things too far, there are some who believe that the current recession is something we deserved (or earned) because so many lived beyond their means.

No matter what you believe, recessions are never fun. Beneath all the statistics and data are real people facing real challenges. The unemployment rate, now 6.7%, is headed to about 8% by late 2009. In the fourth quarter, real gross domestic product will drop the most since the brutal recession of 1981-1982, when, over the course of only two years, Paul Volcker reversed 20 years of inflationary monetary policy.

But it is not just the speed of the collapse that is so scary; it is that our current generation has little experience with economic pain. Between 1965 and 1982, the U.S. economy was in recession one out of every three years, inflation hit double digits and the unemployment rate peaked at 10.8%.

Since 1982, the U.S. has been in recession just one out of 16 years, the unemployment rate bottomed at 3.8% in early 2000 and then at 4.4% in early 2007. In other words, a wobbly economy today feels much worse to the average American and politician than it did 30 years ago.

So we have a real schizophrenia today. People are going to the mall for holiday shopping, parking hundreds of yards away and waiting in long lines to check out. But then these same people go to parties and argue about whether the Obama economic stimulus plan should be $500 billion or $1 trillion. It feels so bad that President Bush is justifying his economic intervention by saying that "I've abandoned free-market principles to save the free-market system."

What's important to recognize is that even at the bottom of the current recession, sometime in mid-2009, the living standards of the typical American will still be amazingly high. In fact, even an aggressive contraction in real GDP will leave per-capita real GDP above 2005 levels.

Now, we did not have 8% unemployment back in 2005, but that kind of jobless rate is not unusual for recessions. The unemployment rate peaked at only 6.3% in the recession early this decade but peaked at 7.8%, 10.8%, 7.8%, and 9% in each of the previous four recessions, respectively, dating all the way back to the 1973-1975 recession.



Obama: Think Smart Cards

Business 2008. 12. 13. 09:18

Barack Obama has announced the single largest new investment in the nation's infrastructure since the creation of the interstate highway system in the 1950s under Eisenhower. Speculation begins to build up about the precise nature of this investment.

I have been in Singapore for the last two weeks and have been observing how this tiny country has created a superbly modern infrastructure that flows seamlessly by leveraging technology and process automation.

From the minute I walked through immigration, I began noticing the country's well-conceived mechanisms for efficiency enhancement. Singapore residents have a special smart card that lets them clear immigration without human intervention. Taxis link up via transponders to a central system through which the country implements congestion control, including peak hour and business district surcharges.

As I have watched the city in motion during my stay, it has made me think about the possibilities for infrastructure modernization in the U.S., now that we're embarking on a new era. The problems--health care, energy, traffic congestion, education, poverty and security--each have major implications when you apply smart-card-based process control in the Singaporean way.

Dominique Trempont, former CEO of smart-card firm Gemplus Corp. (now part of Gemalto), believes that the U.S. should roll out one multi-application smart card to the entire population in order to automate various government and private-sector functions. "The card can be partitioned into application segments, and the companies rolling out applications on it can pay for the privilege," Trempont says.


The first application category for a smart card is a government-owned, centralized patient record database that then becomes the heart of the U.S. health care system. A patient goes to a new doctor, and the doctor's office can access the records with the card, without the hassle of gratuitous paperwork handling by multiple office administrators and frustration on the part of the patient. Insurance claims and processing could also be integrated with this central system, closing the loop with the doctor's office and the insurance company.

A second application category could belong in the realm of security and identity. Passports and driver's licenses could be implemented on the smart card: It can enable a smooth transition through immigration and other functions, such as traffic management. After all, why do we need cops to monitor whether drivers are staying within the speed limit? If there is scientific evidence that the most energy-efficient speed at which cars should be driven is 60 mph, then drivers should pay for driving above that speed limit. Fines can be automatically charged on a smart card. Congestion-control applications can also be implemented on the same infrastructure based on time, geographical zoning, vehicle type (with incentives for fuel-efficient cars and penalties for gas guzzlers), etc.

"Not only is a smart-card-based infrastructure great for efficiency enhancement, it can be a major revenue generator," Trempont says. No kidding! If every car that drives above 60 mph is charged a fine, and there were an efficient way of collecting congestion taxes, that revenue alone could be enough to finance the $136 billion that the nation's governors need for infrastructure projects related to roads, bridges and railway. It will also generate ongoing revenue for years to come that can pay for many more ambitious projects.


Russia: Fire system caused 20 sub deaths

The deaths of 20 people aboard a Russian submarine were caused by a malfunctioning fire safety system that spewed out chemicals, according to an initial investigation, officials said Sunday.
The submarine, believed to be called Nerpa, is seen heading to its base on Sunday in a Russian TV image.

At least 21 other people were injured during Saturday's test run in the Sea of Japan, the Russian Defense Ministry said.

It was Russia's worst naval accident since the nuclear submarine Kursk sank after an onboard torpedo explosion on August 12, 2000, killing all 118 crew members.

The latest fatal accident was the result of the "accidental launch of the fire-extinguishing system" on the Pacific Fleet sub, Russian navy spokesman Capt. Igor Dygalo told reporters.

Russian news agency Interfax said a preliminary forensic investigation found that the release of Freon gas following the activation of the fire extinguishing system may have caused the fatalities.

Seventeen of the fatalities were civilian members of the shipyard crew, Interfax reported. The submarine was being field tested before it became an official part of the navy, according to a Russian Defense Ministry statement.

The statement said 208 people, including 81 soldiers, were on board the submarine. In addition to the fatalities, the accident wounded 21, Russian officials said.

The accident did not damage the nuclear reactor on the submarine, which later traveled back to its base on Russia's Pacific coast under its own power, Dygalo added.

The submarine returned to Bolshoi Kamen, a military shipyard and a navy base near Vladivostok, state-run Rossiya television said, according to The Associated Press.

Officials did not reveal the name of the submarine, but Russian news agencies quoted officials at the Amur Shipbuilding Factory who said the submarine was built there and is called the Nerpa.

Construction of the Nerpa, an Akula II class attack submarine, started in 1991 but due to a shortage of funding was suspended for several years, the reports said. Testing on the submarine began last month and it submerged for the first time last week.

The Kremlin is seeking to restore Russia's military power amid strained ties with the West following the war with Georgia.

But despite former President Vladimir Putin increasing military spending, Russia's military remains hampered by decrepit infrastructure and aging weapons.

The Kremlin said President Dmitry Medvedev was told about the accident immediately and ordered a thorough investigation.


