Basic SSH server hardening

When discussing some of my recent findings with Kippo I’ve been asked a few times how people can prevent their systems from being compromised via this vector. A quick Google search shows that there are already a number of good resources covering the options, including the Debian Administration article and the Securing Debian Manual. However, the sheer number of options can leave people unsure where to start, so I’ll summarise those that are more common and provide the highest return on investment for the time taken to make the change.
N.B. a lot of the suggestions below are valid for most/all remote access functionality.
Restrict access from unknown locations
If possible (it isn’t always) restrict access so that it can only come from known and trusted sources. This can be done at multiple choke points in the network and system: the perimeter firewall, the host firewall (iptables etc.) or the sshd configuration. When working with sshd itself, the /etc/hosts.allow and /etc/hosts.deny files can be used, for example:
/etc/hosts.allow

#Corporate HQ gateway
sshd: 1.2.3.4/255.255.255.255

/etc/hosts.deny

#Generic Deny All
sshd: ALL

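Alternatively (or additionally) the same restriction can be enforced at the host firewall. A minimal iptables sketch, assuming the same single trusted gateway address and SSH still listening on the default port, would be something along these lines (slot the rules into your existing ruleset as appropriate):

iptables -A INPUT -p tcp -s 1.2.3.4 --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
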
It doesn’t matter how insecure your system is: if an attacker can’t connect to and communicate with a vulnerable service, they can’t exploit it, period.
Restrict remote root access
Preventing remote access to the root account can reduce the damage that can be caused by a compromise. With SSH this can be achieved with a single configuration line:
/etc/ssh/sshd_config

PermitRootLogin no

Only allow access to specific accounts
Does every account on your system need to be able to remotely access the system via SSH? No? Then why can it?
Remote system access can be restricted on a per-user basis, either as a whitelist using the AllowUsers directive or as a blacklist with the DenyUsers directive. For example, if I only wanted to allow my own account access via SSH:
/etc/ssh/sshd_config

AllowUsers andrew

These capabilities can be useful with certain honeypot systems; if you create a weak user account linked to an FTP or POP3 honeypot (for example), the same weak account can be prevented from gaining access to a shell with the DenyUsers directive, limiting it to only those services that are being monitored.
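A minimal sketch of that idea (the account names here are purely illustrative):

/etc/ssh/sshd_config

#block the weak honeypot accounts from ever reaching a shell
DenyUsers ftpbait pop3bait
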
Run on non-standard port
Yes, this is ‘security by obscurity’; if it is the only change you make you haven’t really improved security at all, but it is still useful as part of a wider security posture. Attackers are continually scanning the internet looking for new systems to exploit; at the time of writing the ISC statistics show connections to tcp/22 hitting around 100k targets, and even moving to the relatively common alternative port 2222 drops the malicious traffic by around 90%.
/etc/ssh/sshd_config

Port 2222

This reduces the number of malicious attempts targeting the service, which will reduce both processor/network load and ‘noise’ in the logs. If you now get a burst of failed log-in attempts in the logs, it may be indicative of a specific attacker rather than just the usual background noise of bots and worms scanning for new victims.
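Whichever of the directives above you change, remember that sshd only reads its configuration at start-up. On a Debian-style system the configuration can be sanity-checked and the service restarted with something like the following (keep an existing session open while you do, in case of mistakes):

#check the config file parses cleanly before restarting
sshd -t
/etc/init.d/ssh restart
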
Summary
Implementing the above can drastically improve SSH security over the defaults, and the relatively small effort required provides a great ROI. So what’s your excuse? Go harden that SSH installation.
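Pulled together, the suggestions above amount to only a handful of sshd_config lines (the port and username are the illustrative values used earlier; adjust for your own environment):

/etc/ssh/sshd_config

Port 2222
PermitRootLogin no
AllowUsers andrew
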
–Andrew Waite

Example of post-exploit utilities (SSH scanners)

So far my Kippo honeypot installation has received a number of successful log-ins from malicious users, some of whom have been helpful enough to provide some tools for further analysis. A lot of the archives that have been downloaded show that the kits have been in use for a while, with some archive timestamps going back as far as 2004 (of course this could simply be an incorrect clock on the machine that created the archive). Picking the most recent download (2010-07-18), I’ve taken a look at the gosh.tgz archive.
The archive was downloaded from linux<dot>hostse<dot>com<slash>gosh<tgz>; the system is down at the time of writing, but take care if attempting to investigate yourself. Before downloading, the user checked around the system with the commands w, uname -a and cat /proc/cpuinfo; the archive was then downloaded and extracted in /dev/shm/.
Once extracted, the archive contains a number of files:

1: ISO-8859 English text, with CRLF line terminators
2: ASCII text
3: ASCII C++ program text, with CRLF line terminators
4: ASCII text
5: ASCII text
a: ISO-8859 text, with CRLF line terminators
common: ASCII C++ program text
gen-pass.sh: Bourne-Again shell script text executable
go.sh: ASCII text
mfu.txt: ASCII text
pass_file: ASCII text
pscan2: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped
scam: Bourne-Again shell script text executable
secure: Bourne-Again shell script text executable
ss: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linked, for GNU/Linux 2.0.0, stripped
ssh-scan: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linked, for GNU/Linux 2.0.0, stripped
vuln.txt: empty
Interesting files:
  • Files 1 to 5, common and pass_file are password lists, totalling 235,523 potential passwords.
  • mfu.txt is a list of IP addresses, mostly in the 38.99.0.0/16 address space.
  • pscan2 is a fairly common and generic port scanner.
  • scam is a shell script that appears to be the core brains of the toolkit. It essentially loops through scanning different ranges of IP addresses while periodically emailing the contents of vuln.txt back to its master (mafia89tm@yahoo.co.uk).
  • ss appears to be another scanner used to look for potential targets.
  • ssh-scan appears to be a Romanian tool, judging by the message displayed if it is run without arguments (according to Google Translate, possibly NSFW), and as you would guess from the file name it is a scanner for SSH services.
  • vuln.txt is blank in the archive, and will hold the list of vulnerable systems located by the scanners.

All told this appears to be a kit for performing further scans for poorly secured SSH servers, and it is likely that a similar kit hosted on a different compromised machine was responsible for identifying my installation in the first place. Kits like this also quickly show the problem with tracking down the malicious user behind a compromise or attempt: it is rare for attacks to be launched from systems that can easily be traced back to the malicious user.
A quick Google search confirms that this kit (and user) has been seen in the wild attacking other systems; this posting on the Shell Person blog writes up the aftermath after a production system was compromised by the same kit.
–Andrew Waite

Initial Kippo honeypot stats

I’ve been running Kippo for nearly two weeks now (I decided to live dangerously and go with the SVN version) and have seen some interesting results.
Top 10 most common passwords attempted:

  1. a (651)
  2. 123456 (495)
  3. password (331)
  4. 12345 (302)
  5. 123 (224)
  6. 1234 (169)
  7. 1 (139)
  8. 12 (123)
  9. root (105)
  10. test (46)

select count(password), password
from auth
where password <> ''
group by password
order by count(password) desc
limit 10;

Top 10 most common usernames attempted:

  1. root (8510)
  2. admin (144)
  3. test (127)
  4. oracle (96)
  5. nagios (49)
  6. mysql (47)
  7. guest (43)
  8. info (42)
  9. user (41)
  10. postgres (40)

select count(username), username
from auth
where username <> ''
group by username
order by count(username) desc
limit 10;

Success ratio:
17065 attempts, 48 successful connections. (N.B. results are skewed as the account has a purposefully poor choice of password.)

select count(success), success
from auth
group by success
order by success;

Number of connections per unique IP:

  1. 202.99.89.69 (5212)
  2. 200.61.189.164 (1752)
  3. 78.37.83.203 (1043)
  4. 218.108.235.86 (848)
  5. 195.14.50.8 (628)
  6. 218.80.200.138 (271)
  7. 58.222.200.226 (238)
  8. 58.18.172.206 (158)
  9. 119.188.7.174 (128)
  10. 119.42.148.10 (113)

select count(ip), ip
from sessions
group by ip
order by count(ip) desc;

The number of attempts per IP address was relatively low; in total 194 different source locations have attempted to access the server, with each typically making only 4 attempts.
Packages:
Once ‘exploited’, a number of attackers have proceeded to download various rootkits and utilities (thanks for these). Nothing too interesting yet: standard rootkit functionality, IRC clients and SSH scanners for further compromise. I still need to analyse some of these in more detail, so watch your RSS feeds for more to come.
One malicious user also attempted to create new user accounts on the server; if you have an account called ‘iony’ with a password of ‘ionyszaa’ then you may want to remove it…
If you’ve got a spare machine and public IP address, give Kippo a shot; setup is relatively easy, I’ve seen some interesting malicious user sessions, and it turns out that some of those ‘31337 haxxors’ that everyone fears really can’t type.
–Andrew Waite

Starting with Kippo (SSH Honeypot)

Having started life as a Linux server admin, I’m only too aware that many attackers see remote access functionality as a way into a system, and as SSH is the de facto standard for Linux remote access it is a prime target for attack. The stats collected by DShield give an indication of the extent of the problem.
As a result the Kippo honeypot is something that I’ve had on my radar for a while. For a number of reasons I hadn’t found time to implement the system in a live environment, but a recent post on the Diatel blog suggested that installation might be quick and pain-free.
Kippo is described by its author (Desaster) as:

Kippo is a medium interaction SSH honeypot designed to log brute force attacks and, most importantly, the entire shell interaction performed by the attacker.
Kippo is inspired, but not based on Kojoney.

Installation for me was painless: running a Debian system, I downloaded the latest archive to disk, unpacked it and installed the python-twisted package (I hadn’t read Mig5’s comment until after the install so now need to go back and live on the bleeding edge…). I did hit a couple of problems when trying to start up the system, which is as simple as invoking ./start.sh:

  1. First, I was logged in as root when I first tried to start the system (not clever I know, was testing…). Kippo encounters an error when started by a root user. As Desaster rightly states, it’s not wise to run Kippo as a root user anyway and running as a regular user resolves the issue.
  2. Second, when running as a normal user I got a ‘meaningful’ error of “Failed to load application: ‘NoneType’ object has no attribute ‘get’.” A quick piece of Google-fu led me to this ticket, which explained that Kippo was missing the file kippo.cfg; as described there, copying kippo.cfg.dist to kippo.cfg corrected the issue and produced a fully functional system.

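For reference, the install and first run described above boil down to a handful of commands on a Debian box; this is just a sketch (the archive filename is illustrative), with the kippo.cfg copy being the fix for the second problem:

#install the one dependency needed (as root), then work as a regular user
apt-get install python-twisted
#unpack the release archive downloaded from the project site (filename illustrative)
tar xzf kippo-0.4.tar.gz
cd kippo-0.4
#kippo ships without a kippo.cfg; copy the supplied template
cp kippo.cfg.dist kippo.cfg
#start as a regular (non-root) user
./start.sh
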
There are a couple of key files that can be edited to change the feel of the system that is provided to malicious users:

  • kippo.cfg contains runtime information including log location, fake hostname etc.
  • kippo.tac contains an array ‘users’, which lists the username/password combinations that the emulated SSH login will accept as ‘valid’.
  • The honeyfs/ directory goes so far as to allow you to create a ‘real’ filesystem for the malicious user to interact with, potentially copying a live server’s filesystem to the directory to help camouflage the emulated system (after sensitive data is removed/sanitised, obviously…). I haven’t tried this myself yet but it is definitely on my to-do list; a rough sketch of the idea is below.

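As a rough, untested sketch of populating honeyfs/ (paths and file choices are illustrative), something along these lines would copy a few harmless files from a real host into the fake filesystem, after which anything sensitive should be stripped out:

#copy a small selection of real, non-sensitive files into the fake filesystem
mkdir -p honeyfs/etc
cp /etc/hostname /etc/issue /etc/motd honeyfs/etc/
#never copy credential material such as /etc/shadow, and sanitise anything identifying the real host
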
From initial testing I’ve got high hopes for Kippo becoming a mainstay in my honeypot toolbox; the interactive session provided to a malicious user is reasonably convincing at first glance, and I particularly like the trick of keeping users logged in after they think they’ve sent an ‘exit’ command to close the session; it could get some interesting results.
For post-compromise analysis Kippo also provides an interesting utility, utils/playlog.py. This allows you to replay a malicious terminal session in real time, typos and all, to truly provide a feel for the malicious user’s interaction with the session. To help whet your appetite whilst I wait for someone to target my Kippo installation, Kippo has a few demos of the playlog capabilities from compromise attempts. Get your demos here.
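Usage is straightforward; something along the lines of the following replays a captured session from Kippo’s tty log directory (the log path and filename here are illustrative):

python utils/playlog.py log/tty/20100718-153512-9507.log
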
–Andrew Waite

InfoSec Triads: Cost/time/functionality

Triad: Cost, Functionality, Time

Following InfoSanity’s recent (and unexplained) focus on triads, this post looks at the relationship between cost, time and functionality. For the purpose of this discussion assume a scenario of introducing a new product/service or adding new capabilities to an existing service.
In an ideal world all projects would have enough resources and realistic timescales to develop all required functionality to the highest level of quality. However in the real world this is rarely achievable when working with external constraints. Therefore in any project compromises are inevitable.
The theory is that with a given set of resources, only a finite amount of functionality can be developed. It therefore stands to reason that additional functionality can be added to a project by increasing the length of the project or adding additional resources (although The Mythical Man-Month and Dilbert may refute this simplistic theory).
Under ever-tightening economic conditions and competition, functionality is stripped from the service in order to reduce costs and/or development and implementation time. As security is often seen by the wider business as a nicety rather than a necessity, security features are commonly the first to be dropped, or the security of the features still implemented is reduced.
Despite what infosec professionals (myself included) may like to think, reducing security to meet business or market drivers isn’t necessarily a bad thing, providing that the benefit gained is proportionate to the additional risks introduced and those risks are acceptable to the business and/or client. However, in the increasing world of regulatory compliance this can prove a false economy for a business, as it is almost universally more costly to implement additional security on top of an existing solution than it is to bake the required security into the design and development phases.
And if anyone tells you that a less secure solution is temporary and will be rectified at a later date… Don’t believe them 🙂
— Andrew Waite

mimic-nepstats_v1-1.py

I’ve been a bit lax in writing this post; around a month ago Miguel Jacq got in contact to let me know about a couple of errors he encountered when running InfoSanity’s mimic-nepstats.py with a small data set. Basically, if your log file did not include any submissions, or covered a period shorter than 24 hours, the script would crash out. Not the biggest problem, as most will be working with larger data sets, but annoying nonetheless.
Not only did Miguel let me know about the issues, he was also gracious enough to provide a fix; the updated script can be found here. An example of the script in action is below:

cat /opt/dionaea/var/log/dionaea.log | python mimic-nepstats_v1-1.py
Statistics engine written by Andrew Waite – www.infosanity.co.uk
Number of submissions: 84
Number of unique samples: 39
Number of unique source IPs: 65
First sample seen: 2010-06-08 08:25:39.569003
Last sample seen: 2010-06-21 15:24:37.105594
System Uptime: 13 days, 6:58:57.536591
Average daily submissions: 6
Most recent submissions:
2010-06-21 15:24:37.105594, 113.37.56.28, emulate://, 56b8047f0f50238b62fa386ef109174e
2010-06-21 15:18:08.347568, 195.205.5.71, tftp://195.205.5.71/ssms.exe, fd28c5e1c38caa35bf5e1987e6167f4c
2010-06-21 15:17:08.391267, 195.117.74.62, tftp://195.117.74.62/ssms.exe, bb39f29fad85db12d9cf7195da0e1bfe
2010-06-21 06:29:03.565988, 195.160.222.101, tftp://195.160.222.101/ssms.exe, fd28c5e1c38caa35bf5e1987e6167f4c
2010-06-20 23:34:15.967299, 195.242.145.40, http://208.53.183.164/trying.exe, 094e2eae3644691711771699f4947536

— Andrew Waite

OWASP at Northumbria Uni – June 2010

June 16th marked the first time the Open Web Application Security Project’s (OWASP) Leeds/Northern Chapter ran an event at Northumbria University, meaning it was the first time I was able to attend. Jason Alexander started off events with a brief overview of OWASP and the projects the group is involved with.
ENISA Common Assurance Maturity Model (CAMM) Project
Colin Watson did a good job of explaining the work he and others have been doing. The project has released two documents, which Colin discussed: the Cloud Computing Risk Assessment[.pdf] and the Cloud Computing Information Assurance Framework[.pdf]. Don’t be put off by the focus on ‘Cloud’; whilst this was the focus and reasoning behind the work at the start of the project, the information and processes Colin describes could easily be applied to any IT environment and at first glance seem to be well worth a read.
Open Source Security Myths
Next up, David Anumudu gave a somewhat brave talk considering the audience, discussing and (potentially) debunking the assumption that open source software is more secure than its closed source competitors. David picked on the now famous phrase from The Cathedral and the Bazaar, ‘Given enough eyeballs, all bugs are shallow’. David argues that while this is true and reasonable, it only works in practice if all the eyeballs have both the incentive and the skills to effectively audit the code for bugs, something that is rarely discussed. A cited example of insecurity in prominent open source software was the MD6 hashing algorithm, introduced at Crypto 2008, which despite being designed and developed by a very clued-up team still had a critical flaw in its implementation.
My ultimate takeaway from this talk was that software’s licensing model has no direct impact on the security and vulnerabilities of any codebase; only the development model and the developers themselves have any real impact.
SSL/TLS – Just when you thought it was safe to return
Arron Finnon (Finux) gave a great presentation on vulnerabilities and weaknesses in the implementation of SSL protection. Arron argues that most problems with SSL are actually related to the implementation rather than the methodology itself, and that despite the high profile of SSL-related problems most techies still don’t ‘get’ it; and most users, regardless of awareness training, will continue to blindly click through certificate warning prompts.
Several of Moxie Marlinspike’s tools were discussed, mainly SSLStrip and SSLSniff. I was aware of both tools but hadn’t tried them out in my own lab yet; after Arron’s discussion of the problem and their capabilities this is definitely something that I intend to rectify shortly. Especially when combined with other SSL issues, including the SSL renegotiation attack and the Null Prefix[.pdf] attack, problems with SSL can be deadly to an environment.
The main takeaway from this talk was that SSL isn’t as secure as some would claim, and that when planning to defend against these attack vectors we need to stop thinking ‘what if’ and start working towards ‘what when’.
AppSensor – Self aware web app
Colin Watson came back to the front to discuss the work currently being undertaken on the AppSensor project. The idea behind the project is to create web applications that are, to an extent, ‘self-aware’, enabling any user making ‘suspicious’ web requests to be limited or disconnected in order to contain the damage they can cause to the target system. It works on the premise that the application can identify and react to malicious users in fewer requests than the user needs to find and exploit a vulnerability.
The identification comes from watching for a collection of red flags and tripwires built throughout the system, from simply looking for X number of failed log-in attempts to real-time trend analysis looking for an unusual increase in requests for particular functionality. A lot of the potential indicators and traps reminded me of an old post on the Application Security Street Fighter blog covering the use of honeytokens to identify malicious activity, which I’ve covered previously.
Summary
Overall I really enjoyed the event. I’m hoping that the Leeds/Northern OWASP chapter decides to run more events in Newcastle, but if not it’s convinced me that the events are worth the time and cost of travelling down to the other locations. It’s always good to discuss infosec topics face to face with some really knowledgeable people.
–Andrew Waite

InfoSec Triads: Security/Functionality/Ease-of-use

Triad: Security, Functionality, Ease of Use

Following on from the introduction of the C.I.A. triangle, another triangle is used to help explain the relationship between the concepts of security, functionality and ease of use. A triangle is used because an increase or decrease in any one of the factors will have an impact on the other two.
As an example, increasing the amount of functionality in an application will also increase the surface area that a malicious user can attack when attempting to find an exploitable weakness.
The trade-off between security and ease of use is commonly encountered in the real world, and often causes friction between users and those responsible for maintaining security. Microsoft had long been targeted by the security community for allowing everyday users to operate the system with administrative or system-level permissions, which meant that any exploit targeting a userland application was immediately given access with full rights. When Microsoft tried to limit this by forcing users to specifically request elevated privileges via User Account Control (UAC), there were a high number of complaints from users who weren’t happy with the extra actions required to complete tasks. As a result many instructions and guides were created to teach users how to disable the UAC functionality; increasing the ease of use and decreasing the steps needed to perform some tasks, but at the expense of disabling an improved security system.
A recent blog post discussing the security of the Windows operating system states, quite colorfully, that:

Windows is an open cesspool to anyone developing applications. Developers can store information anywhere in the registry and store executable components anywhere in the file system – this includes overwriting existing registry entries and files. They can also write “hooks” to intercept, monitor and replace operating system calls to do fancy things. While all of this is great from a functionality standpoint, it’s also the main reason why Windows can never be secured.

Leaving aside the bias and hyperbole of the above, rightly or wrongly developers are able to write to the filesystem and registry and to hook API calls in order to provide the functionality expected and requested by end users. From this standpoint no functional operating system will ever be 100% secure; every system, and ultimately every user, must settle on a compromise between acceptable functionality and usability, and acceptable security.
–Andrew
<Update>:
I’d been looking for this Dilbert strip when writing the post and just came across it now, enjoy:

Dilbert - Security trade-off

</update>

Breaking into the loop

For those that don’t know, the first run of the InfoSec Mentors programme got off the ground this week. My mentee, Jacob (@BiosShadow), has released his first blog post since we were paired together, Being out of the Loop, and it has struck a chord with my own experience of similar problems and got me thinking at the same time.
To sum up Jacob’s post, he is frustrated that there is no active infosec community within a reasonable distance of his location. I can relate to this sentiment; when beginning the slow road towards an infosec career, before I was even aware a legitimate career in the field was possible, my own interactions and learning opportunities were limited to reading books or online articles. The online infosec community is, in my opinion, great, and the debates, relationships and friendships that can be developed are something that I don’t think is found in any other industry, but it can’t fully replace the benefits of a face-to-face meeting.
One of the early mistakes I made was assuming that information security and computer threats were only of interest to those solely focused on information security. I’ve since discovered that different aspects of infosec can be of interest or concern to anyone in the IT or business arenas, and I’ve been involved in some interesting discussions and debates on information security with some very intelligent people at some of my area’s flourishing IT user groups whose focus isn’t infosec-specific. It was one of these groups, SuperMondays, that provided my first public speaking opportunity, and despite my topic, honeypots, being quite specialist I believe the talk was very well received.
Since first finding SuperMondays I have continued to discover a great and vibrant IT community in my region (North East England), and I’m continually surprised to stumble upon other groups in the area, like the Northern UK Security Group (must find time to travel across to Manchester), and I am eagerly awaiting the local-ish OWASP chapter from Leeds heading north to Northumbria University next Wednesday.
If you feel the same frustrations as Jacob and myself (and others), throw a message out on Twitter or your blog etc. letting people know who and where you are. You just might be surprised to find others in your area with the same problems.
For my part: If you’re in the North East of England (or general area) get in touch and let me know, always keen to discuss information security with anyone willing to listen.
–Andrew Waite

InfoSec Triads: C.I.A.

Triad: Confidentiality, Integrity, Availability

Information security is a far-reaching and often all-encompassing topic, but at its core information security and the protection of digital assets can be reduced to three central attributes: Confidentiality, Integrity and Availability, often referred to as the CIA triangle (not to be confused with the US Central Intelligence Agency).
Each factor provides a different and complementary protection to data, and all three must be sufficiently preserved to maintain the usefulness of the information and information systems being protected.
Confidentiality
Maintaining data’s confidentiality requires ensuring that only those users and/or systems authorised to access the stored information are able to access the protected data.
As a result this aspect is often what first comes to mind when thinking about information security: the act of preventing those that shouldn’t be able to access the data from doing so. Some of the most commonly understood security systems and processes fall into this category, for example requiring user authentication in the form of a password, or preventing remote access to a restricted resource with a firewall. Removing technology from the process, this is the equivalent of using a lock on the office filing cabinet.
Integrity
Maintaining data integrity involves ensuring that the data remains correct, whilst in storage or transit, and that only authorised changes are made to the data.
At a high level data integrity can be protected using controls similar to those enforcing data confidentiality discussed above: if the data can only be accessed by those that are authorised to view and modify it, then its integrity is protected from outside interference. Unfortunately this is only part of the story, as it only protects against the data’s integrity being compromised by a malicious third party. Data integrity can also be compromised by an authorised user changing the data in error, a program handling the data could contain a programming or logic flaw resulting in it changing the data in a way other than desired, or a hardware error could result in the stored data becoming corrupt.
Hashing algorithms like MD5 or SHA1 can be used to determine whether the contents of a file have been changed, but they cannot determine whether the file has been changed legitimately. This highlights one of the key problems within the realm of information security: while data integrity falls under the remit of security, there are many different factors, and in the business world many different individuals and/or departments, that can have a direct impact on data integrity; and the overall protection is only as strong as the weakest link.
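As a simple illustration using the standard coreutils tools (the filename is just an example), a checksum can be recorded while a file is known to be good and verified later; a match only proves the bytes are unchanged, not that the recorded version was itself correct:

#record a checksum of the known-good file
sha1sum quarterly-accounts.ods > quarterly-accounts.ods.sha1
#later: verify the file still matches the recorded checksum
sha1sum -c quarterly-accounts.ods.sha1
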
Availability
Maintaining availability of both systems and information is crucial for most IT professionals to continue in gainful employment. As a result a lot of tasks geared towards ensuring systems availability are already incorporated into most business practices, including frequent backups and maintaining standby systems to replace production units in the event of a failure.
Availability of data can be attacked by a malicious user, application or script deleting the data itself; a newer form of attack, ransomware, follows the general trend of monetizing computer attacks and involves locking legitimate users out of their data by cryptographically protecting it with a key known only to the malicious parties, who then attempt to extort money from the victim in return for the key to unlock the data.
Alternatively, denial of service (DoS) attacks prevent legitimate use of a service by consuming all of the server’s resources. Like ransomware, DoS attacks can be monetized by a malicious party, either before the event, with the potential victim required to pay up to prevent the attack from occurring in the first place, or by forcing the victim to pay to stop an attack that is already under way. Online betting sites were among the first businesses to be threatened in this way. The IACIS has a good paper on the topic of cyberextortion [pdf].
— Andrew Waite