Tuesday, November 17, 2015

Respect for developers


At work, we tend to spend the time we're not on projects however we want. Ideally, "however we want" means – technically, anyway – not being on Facebook 8 hours a day. Jokes apart, usually someone picks up a skill they aren't familiar with and tries to learn it when not on projects. Since most of my professional life in information security has been spent breaking things, I've always been curious about how things are on the other side – building things. What challenges does a developer face? Why is there still so much truly awful, insecure code out there, despite there being tons of resources to learn from? After all, as a hacker that's how I've learnt all my life – don't know how to hack a new technology? Go read. Learn. Master. Hack. Why should it be any different for developers?
And so, I decided to fix a ton of bugs in Whetstone, our internal appraisal system, which our boss wrote while he was on paternity leave. He takes his vacations seriously, as you can see. Whetstone was written in Ruby on Rails – one of my favorite web frameworks, just because of how easy it is to get started. So I go, ooh nice – how hard can it be? And that's where it all began to go wrong…
So obviously, I can't gut the whole thing and start rewriting the entire system. I have to fix the bugs that currently exist. I launch GitHub and see 78 open bugs. So now I have to prioritize which ones to fix first. I start the process. I pick a simple one to start off with and quickly fix it. I'm excited. I want my change to be visible immediately. Oh but wait, that's not how it goes. Apparently, Git has something called branches that I first need to learn about – so as not to mess with other people pushing code as well. That puts an end to my coding for the present, and I spend 2 hours reading about how Git handles versions and branches and why it's better than SVN and so on. Then I push my code and see it appear on GitHub. But then someone has to "pull" my changes, or accept them. And clearly, while I have permission, it makes no sense to approve my own code – that defeats the purpose. And boss is on leave. No merge. No warm fuzzy feeling of a first ever fix. Process problems.
Anyway, I'm now feeling better and pick up a few moderately complex issues. It seems I know how to fix them. Code. Compile. Fail. WTF. Google. Stack Overflow. Fail. As it turns out, I've upgraded Rails to 4.3 or something, and all the fixes on the Internet with up-votes on Stack Overflow fail. Somehow, after a lot of trial and error, I find a fix, but it has taken way, way longer than I'd ever expect it to take. And note, this is a simple internal application. If I fixed bugs at that speed in production, as a developer I'd probably be fired in a week. Skill problems.
Then I get put on a billable project and forget all about this for a bit. Boss comes back, my 4 changes are accepted. Am I having fun, he asks? Yes, yes, of course I say, drunk with the success of my 4 massive quashed bugs. Okay, have fun – fix some more, says boss. So 2 months later, I pick it up again. Let's start now. Er, why did I write this code this way? F***, it was so long ago – I should have commented. Let's look at some old code, maybe that explains it. Er no, no comments there either. Just some fancy one-liner SQL-looking query. Ha, I'll just comment out the old code then and write fresh new code! That'll fix it. For sure…… 2 hours later. Undo. Undo. Undo. Okay, that broke everything :| and clearly I can't code. I suck.
And now, after all the undoing, something else isn't working either. Screw it. Let me revert to a clean state. Let me just delete that stupid whetstone_old directory that I created, and I'll rebuild everything from Git again – from my old state. Delete. Rebuild. Ugh. The README isn't good. How the hell did I build it last time? What does this error even mean? Note that at this point I have NOT fixed a single new bug yet… 1 hour later – AH, so all you needed to do was change a config file entry? Okay. Anyway, let me first collect all my old notes from whetstone_old before I forget stuff like this. Don't want to waste time.
Eh. Where’s whetstone_old?? Oh no!!! Don’t don’t tell me it was in the directory I deleted #-o. Yes. It WAS in the directory I deleted. Woo Hoo. All gone. I’m screwed. Backups are important. There’s a reason you backed up to whetstone_old. Why would you delete it?
Another day wasted then. Okay okay, now let's fix bugs. Ah, here's an easy bug – "Display last login time for a user". 10 minutes. I got this. Just print a date out in the view. Adds date to view. But this is just printing the current date each time ffs :-o. We want it per user. Oh. That means a new column in the users table. I know a little MySQL though, from all my SQL injections over the years, so this should still be quick….
Error. Can't connect to MySQL database. Eh? Different port? No. 15 minutes. Can't figure out where the DB is. RTFM. SQLite. NOT MySQL. Now if I want to debug anything, I need to learn SQLite querying. Learn how to use a database. NO ffs, I have still not fixed the bug. Okay, now I understand SQLite. But how do I add that column? Learn Rails migrations. Wow. Another half an hour. Add column. Finally fix the bug. It's lunch time. I should stick to hacking things – I clearly am not good at this stuff. But wait, now I've got it… I know how the DB works, so things should now work out...
Okay, let's log in and see if our date prints right. Just to check lol. Obviously nothing can go wrong. Hmm, looks okay. Oh wait, we need to do some stuff with the date. Why do none of the built-in functions work? Oh no, it's not a varchar – it needs to be a DateTime – only then will those functions work. Or I'd have to write my own functions, which would be stupid for such a trivial task. So I have to change the data type in the DB. Learn how to change Rails migrations. And while that looked simple, it didn't work. The accepted solution was to "DROP TABLE" and "Recreate TABLE". Ugh.
Now remember, I hadn't created the table in the first *^%$#@! place. So I now have to find out the structure of every single column of that table and re-create it. Screw it. I don't want a fancy date. I'll just leave it as it is and move on… But now that failed migration attempt has screwed something else up – and code that I never touched is not working. Fantastic. Things you never coded can break despite your never touching them.
Meaning, I now have to learn the entire old schema. Luckily logging was turned on by default, and the old, old, old log file was never truncated. Extract the old query. Convert it into a valid SQLite query. Drop the old table. Create the new table. Okay, error gone. But now no date display. WTF!!! That's because recreating the table clobbered my Rails migration. So do that again. Finally done. In short, playing with the DB isn't particularly fun, and any re-architecting brings up weird, weird problems.
And after all this, all I've done is finish one lousy feature request that displays the last login time. Okay, let's move on to the next bug. It's a security issue now: user IDs are generated sequentially, and that's a problem – a hacker could enumerate all valid IDs. So I need to randomize them, meaning I have to change the data type of a column. Right now the id column is an integer; I need to convert it to a GUID. Looks very similar to the previous DateTime problem haha. I got this one for sure. Migrations. Change. Blah. Er. Why did something else break again? Why is nothing working now? Why can't I even log in? Panic. Turns out that id was the primary key for the table. And messing around with the primary key is a bad, bad idea. Even worse than messing around with the rest of the database. Don't do it. Just don't do it. Deny that feature request. Or learn how to change a primary key.
This is terrible. Is this what fixing other people’s code is like? And this is for a small internal application which is relatively well written, compared to some of the bloated code that I’ve seen over the years. Anyway, more hours wasted – I figure out what to do. But now, for some reason I can’t find out where the hell to put the ‘randomizing code logic’. A few more hours… oh wtf.. you know by now that I’m spending more time fixing stupid knowledge gaps than actual bugs.
As it turns out, my boss (correctly) decided to not write a line of code for authentication, and relied on a third party Ruby Gem instead. Well whoopee doo doo. #-o Why am I sarcastic, you say. It means that I now have to hack on some third party module code instead. Start learning third party module code now. That’s another fail, and I have to contact the module owners who patiently explain what to do. Eventually it all works.
Let's look at Bug 3 now. All you need to do is make a really simple UI change. This CAN'T be hard ffs. How hard can it be? But by now I'm in a numb state – coz I'm fairly sure something will go wrong. Oh look. It's CSS. And CoffeeScript. Do I know either? NO. Learn CSS. Learn Bootstrap. Learn CoffeeScript. The saddest part is I just need enough to fix a tiny bug, but without knowing the basics I can't do it. More time gone.
And this pattern repeats.. again and again and again. Eventually of course, as expected (else I wouldn't have a job) I improve and start fixing things quicker. But something or the other always breaks – when you least expect it to. And when it's not your code, it's harder to fix. And now I think of those massive banking applications, where the chief architect and the lead developer quit a while ago and there are 5 new developers. Shudder.
And I still have tons of bugs left. I find my mind thinking of shortcuts and dirty ways of solving problems all the time. I find myself thinking of how to somehow get that number of bugs down. Somehow. Quickly. So I can get back to actually writing new cool stuff. It isn’t easy to maintain that mental discipline. Honestly isn’t. And this is for an internal app, which isn’t critical and with no deadlines except those that I set for myself. I have a new found additional respect for developers.
I always knew a developer's job is trickier than that of a hacker – and I'd like to honestly say I've always been respectful to every dev I have met. But I've never sympathized with one. I've always felt that it is yet another job that one learns to do well over time. And that's true, sure – but they do work under greater pressures than I do. There are other skills in my job as a white-hat hacker, other traits that maybe developers don't need to learn – but whatever a developer does need to do isn't easy. IF, IF you want to do it well.
Lastly, I want to tell every white-hat hacker to put on the defensive hat once. Actually code. Code things that people will look at and break, and they'll tell you how awful your hacks are and how you should test more… before releasing to production. See how it feels. :) If nothing else, it'll make you respect the developer community more than you already do. And that's how it should be.

Tuesday, February 10, 2015

Using the Call stack to debug programs

With large pieces of malware it is difficult, when under time pressure, to fully reverse everything. So, many times you just put a breakpoint on an imported function – say send() for outbound TCP connections – and then run the malware.

What happens now is that the breakpoint will be hit when the malware tries to send traffic. And that's fine. But where was it called from? As in, which function actually made the call to send()? This is important because it'll help you go back and find out how the payload was constructed, how the C&C was decided... and many other things.

There are two ways to do this. One, you can identify all the references the binary makes to the send() call and set breakpoints on all of them. Then, when one gets hit, you can inspect it further. This is the most obvious way to do it.

The other way is to first, as usual, set a breakpoint on the function you want to trace. Here I just chose SetUnhandledExceptionFilter() as that's the first function I can see :). The address immediately after that call is 0100251D - make a note of that.

Then, instead of searching through the 123 references to the call and trying to guess which one is useful, look at where the function is returning to when the breakpoint is hit. This can be found on Olly's call stack. Many times you have to step through a lot of system function code, but eventually it will return to user code. I tend to use Ctrl+F9 (execute till return) while in system code, and keep watching the call stack for the point where the return address changes to one inside the user code.

Then I go to that address, set a breakpoint and hit F9 again. This time I break at the exact place the call was made from. Note the address? :)

Debugging child processes - Olly 2.01h

A lot of malware creates new processes, injects the actual malicious code into the memory of that process and then runs it from there. Meaning.. you could start debugging malware.exe... but then find out that it has called CreateProcess() and created a new child process boo.exe. Then it called VirtualAlloc()/VirtualAllocEx() and allocated memory inside boo.exe.

The point being... malware.exe has nothing and you now need to debug boo.exe. There are a few different ways I have tried, all with varying degrees of success.

- Get PID of the Child process from Process Explorer. Then go to Olly and Attach to process.
- Set Olly as JIT debugger. Windows Task Manager. Right click - Debug
- Set Olly as JIT debugger. Set the entry point of the process to CC (a breakpoint) and run the process.
- Try and attach Immunity to the process while debugging the parent in Olly.

But none of that worked at all, for multiple reasons... or worked only very sporadically. That sucked.

Luckily OllyDbg 2.01h has this new (well, new for me ;)) option to debug child processes created by the process being debugged. Meaning, if I'm debugging malware.exe and it creates a new process.. and you have this option checked, Olly will automatically open a new window and load boo.exe into it :). Perfect.

You can find this option in Olly 2.01 (latest version) in Options - Events. Tick the box that says 'Debug child processes' and save yourself a lot of pain :).

View PE header - Olly 2.01

Many times we want to view the PE header to study certain fields. It's easy enough to do this with the million PE editors out there. You could even do it programmatically - I use the pefile module in Python for that. There are almost certainly others. But the point is... that I love Olly :). How can we do it in Olly?

Well, in Olly 1.10 you could go to the PE header location in the Dump section, right click and select Special - PE header. And this would perfectly parse the PE header and display it in the Dump. Or just go to View - Memory Map, right click and select Dump.

But Olly 2.01 doesn't have this. The option seems to have changed. I had trouble finding it but finally managed. There is no Special -> PE header menu but a 'Decode as structure' menu.  Here are steps on how to view the PE header in Olly 2.01:
  • View - Memory Map. Locate PE header for the exe. Get the address from here.
  • Go to the Dump section of Olly. Ctrl+G - enter address.
  • Right click on the start of the PE header (the MZ bytes) - Decode as structure - and in the drop-down box select IMAGE_DOS_HEADER. Now you see just that one structure decoded.
  • Scroll down a bit till you come to the PE header. Right click on the PE header (anywhere, as long as it's inside the PE header and not the DOS header) - Decode as structure - IMAGE_FILE_HEADER.

And so on.. use all the different IMAGE_ structures to decode the entire header. It's a bit of a pain really, and I preferred the old way :( or maybe just use another tool - just not Olly 2.01 for this. It's good for other structures though.. probably, as you might have guessed.
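
Since I mentioned pefile above, here's roughly what the programmatic route looks like (a minimal sketch; the filename is just a placeholder):

import pefile

pe = pefile.PE('sample.exe')
print(hex(pe.FILE_HEADER.NumberOfSections))
print(hex(pe.OPTIONAL_HEADER.AddressOfEntryPoint))
print(hex(pe.OPTIONAL_HEADER.ImageBase))
for section in pe.sections:
    print(section.Name, hex(section.VirtualAddress), hex(section.SizeOfRawData))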

Extracting PE files from memory

Was recently trying to debug some malware that someone gave me. The malware was extracting itself into memory and working from there. It is possible to debug the malware from memory itself, but it's a bit painful, since you have to run the whole thing each time, let memory get populated with the malicious code, and only then try to understand what the code is doing. That's a waste of time - and if we can avoid it, we should.

As it turns out, it's possible to dump contents from memory. This is especially useful if it's a PE file that's unpacked into memory, because you could dump it and reverse it separately, without all that running of the original malware that unpacked it.

I always knew this was possible but for many reasons (a laziness to learn being foremost ;)) I never did it. This time, I was determined to figure out how to do it. Turns out it is fairly straightforward. All credit for me learning this goes to this blog - http://www.joestewart.org/morphine-dll/

That blog should be self explanatory really, but here are my steps.
  • Load process in Olly and debug it as usual.
  • Once the process has loaded in memory, locate it in the Dump section of Olly.
  • Right click inside the dump. Backup - Save data to file, and save it as an EXE file on disk.
  • Launch a PE editor (I use LordPE). Locate the last section. Add its RawOffset and RawSize together - that's where the PE file ends on disk. Record the number.
  • Open the file in a hex editor and go to this number (offset). Delete all content after this offset - usually it is all zeros. Now go to the start of the file and delete all content up to the start of the MZ header (4D 5A). Save the file. (There's a small Python sketch of this trimming step after the list.)
  • Open the file in Olly...and there you go.. a normal EXE file.
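
If you prefer to script the trimming, here's a minimal sketch using pefile (the file names are just placeholders, and grabbing the first 'MZ' is a naive way to locate the embedded PE - a dump can contain more than one):

import pefile

data = open('dump.bin', 'rb').read()
start = data.find(b'MZ')                              # start of the embedded PE header
pe = pefile.PE(data=data[start:])
last = pe.sections[-1]
end = last.PointerToRawData + last.SizeOfRawData      # RawOffset + RawSize of the last section
open('carved.exe', 'wb').write(data[start:start + end])

This drops everything before the MZ header and everything after the last section, which matches the manual LordPE + hex editor method above.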

I tried some plugins like OllyDumpEx but they did not work for me. They probably are fine - most likely I just made a mistake while using them. I will try some nice plugins soon and update this post when I am done.

Hope this post helps someone easily unpack malware from memory manually. It's fairly easy and you do not need a plugin to do it :)

Wednesday, February 4, 2015

Volatility - Extracting malware signatures from memory

There's a million tutorials online on Volatility and how to use it. This post will teach you nothing new. It is just my own way of learning the tool. All I'm going to do here is to go through each and every plugin which is listed very well here and very well explained here and make my own notes. If you're starting off with Volatility don't read this post - go read the official documentation.

The purpose quite simply is to just help me remember the tons of plugins that Volatility has, so I can use it while performing malware analysis of all those dangerous pieces of malware.

I'll use the memory image of Shylock, downloaded from here, to practise running Volatility's plugins against. The images that I got from the Volatility Git repository (git clone) didn't work for some reason.

- The -f (image file) and --profile (which OS the image was taken from) switches are used in almost every command.
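
A typical invocation looks something like this (the image name and profile here are just examples - use whatever imageinfo/kdbgscan suggest for your image):

vol.py -f shylock.vmem --profile=WinXPSP2x86 pslist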

- imageinfo suggests profiles while kdbgscan more definitively identifies the best profile to use. Sometimes though, kdbgscan identifies multiple candidate profiles. In such cases look at the number of processes found for each candidate and proceed accordingly.

- pslist gives you a list of processes in the memory image. psscan does the same but also picks up hidden processes, which may be there because of rootkits. pstree does the same but gives you a nice tree view, like Process Explorer does on a live system.

- dlllist is a nice plugin that gives you the DLLs loaded by a process. It's best to identify the PID of the process you are after using the previous plugins and then use the -p switch with this plugin. dlldump is the next logical step: you find a DLL and dump it to disk. Again, it makes sense to focus on a specific process.
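
For example (the PID and output directory are made up):

vol.py -f shylock.vmem --profile=WinXPSP2x86 dlllist -p 1136
vol.py -f shylock.vmem --profile=WinXPSP2x86 dlldump -p 1136 -D dumped_dlls/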

- handles, while super-verbose, is a nice way to quickly see everything a process has a handle to. Use -p (PID) and -t (object type) here to filter the output. It's very similar to the Sysinternals handle utility.

- cmdscan lists out all the commands an attacker typed and consoles goes one step further by listing the exact output that the attacker saw when she typed it.

- connections lists out all the connections that were active when the image was captured. connscan is cooler in that it also identifies connections that were terminated.

- sockets lists all the listening sockets as well. sockscan does the same thing but in a different way (it scans for socket structures, so it can also find ones that have since been closed). netscan does the same job but for newer versions of Windows (Vista and later).

- hivelist searches for registry hives in memory. You can then print out entire registry subtrees using printkey with the -K option, which searches all the hives in hivelist's output and returns the key with its values if found. hivedump is super-verbose and recursively prints all the keys found.
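
For example (the key path is just an illustration):

vol.py -f shylock.vmem --profile=WinXPSP2x86 printkey -K "Software\Microsoft\Windows\CurrentVersion\Run"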

- hashdump extracts the password hashes of the local OS accounts from memory. You could grab the hashes from here and then crack them offline.
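
For example (older versions of the plugin needed the SYSTEM and SAM hive offsets from hivelist passed via -y and -s; newer ones usually find them on their own):

vol.py -f shylock.vmem --profile=WinXPSP2x86 hashdump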

There's plenty more and I'll keep adding to this list as I play with them over the next few days. :)

Tuesday, February 3, 2015

Writing ClamAV signatures

Obviously, while learning about malware analysis, it is not enough to only know how to reverse malware. I should know how to protect against it as well. So I wanted to learn how to write signatures really well - it could be useful. I will learn how to do so using the following:

  • ClamAV
  • Yara
  • Suricata
  • Snort
This post... I'll start with ClamAV. Here are my rough notes. The aim is to be able to refer to them over time. They are not the most polished blogs ever nor do I intend to make them appear to be that way :)

Here goes.

----
Stop updater daemon: sudo /etc/init.d/clamav-freshclam stop
Update now: freshclam
Update but run as a daemon: freshclam -d
Stop/start the clamd daemon: sudo /etc/init.d/clamav-daemon stop (or start). Don't try to start clamd directly from the CLI

Send clamd commands using socat:

  - socat - /var/run/clamav/clamd.ctl. Then type command. If started in TCP mode though, you can normally telnet to a port.

  - The connection to the socket times out though fairly quickly by default, so I'd have the command copied to clipboard :)

  - List of all commands available in the clamav documentation on page 17

Scan files: clamscan /tmp/virus_test
Scan files using clamdscan: clamdscan - < /tmp/virus_test

Use libclamav to scan files from inside other software. Can be used with C programs only.

Info about a database file: sigtool --info



Creating signatures:

- Make sure that you unpack the binary before doing this, else it's not very useful.

Hash based signatures:

sigtool --md5 test.exe > test.hdb
clamscan -d test.hdb test.exe
The generated test.hdb contains a single line of the form MD5:filesize:signature name. The moment a single byte of the file changes, this signature will stop matching.

Extract sections from PE file and create a signature for each section:

Use my hash_sections script to do this. Another option is to save all the sections and use sigtool --mdb (an .mdb line has the form PESectionSize:MD5OfSection:MalwareName)

Remember sections with a zero size will cause clamscan to break so don't add those

Also remember that this method is best used AFTER you have done all your analysis and want to detect a packer.. so here you don't necessarily have to unpack the binary before writing a signature

Similar files but minor difference in certain bytes

This means that it is the same malware, but hash-based or section-hash-based signatures will not work. For example:
md5sum 03*
  ae831fcf5591dc0079ebfe4654f23f52 031.exe
  b20a1db0a01f7a6f14f503a6fcdd6c0f 03.exe

Here's a sig where {4} stands for the 4 bytes that change. Save these manually into a file with an .ndb extension (the general format is SigName:TargetType:Offset:HexSignature, where TargetType 1 means PE files):
TestSig:1:*:8DBEEB7FF7FF57{4}10909090909090
TestSig:1:EP+6:8DBEEB7FF7FF57{4}10909090909090 [EP is entry point, and this can drastically reduce false positives]

Same logic but with much more powerful signatures. These must be stored in ldb files:

The difference here is that everything is separated by ; characters. The first block is just a name.. it can be anything. The second block decides which files the signature is applied to (Target:1 means PE files). The third block is the logical expression that decides how the patterns are combined - here (0&1) means both subsignature 0 and subsignature 1 must match. All the actual hex patterns come right at the end, numbered from 0.
LogicalTestSig;Target:1;(0&1);8DBEEB7FF7FF57{4}10909090909090;88074701DB

Whitelist files

The whitelist file has the same base name as the database in which the detection signatures exist. So if all the signatures are in daily.cld, the whitelisting file should be named daily.fp, sit in the same dir as daily.cld, and contain one line (hash : size : random name) per whitelisted file:
5523530941c409b349ef40fa9415247e:51204:Whitelist signatures

Even though a BAD signature for the file is still there in daily.cld, ClamAV will just IGNORE it because the file is whitelisted.

Whitelist specific signatures

Again, the file has the same base name as the database in which the detection signatures exist. So if all the signatures are in daily.cld, the whitelisting file should be named daily.ign, sit in the same dir as daily.cld, and have one line (DB name : line number : actual signature name) per ignored signature:
daily.cld:1237270:Win.Trojan.Agent-807822

Even though the BAD signature is still there in daily.cld, ClamAV will just IGNORE that one signature.

Some nice References:


The sample files, scripts and its signatures can be found in my Git repository - https://github.com/arvinddoraiswamy/lemal

Friday, January 30, 2015

A malware sample I analyzed

Recently I analyzed a malware sample. I don't know what it was or whether I completed it but I stepped through it and wrote a very detailed report about it that I'd like to share now.

It is completely possible that I have missed things in it, but honestly anyone reading through it, especially if you're at the beginner-intermediate level, should get some useful information from it.

I'd love to hear feedback on how things can be done better, and if anyone has indeed analyzed this deeper and better than me - do call me out.. and get in touch with me somehow so I can learn :)

I started a new repository on Git just now - to add a lot of my random stuff that doesn't really have a specific home. Here's the link to the PDF report (no it is not malicious :)).

https://github.com/arvinddoraiswamy/blahblah/blob/master/somevirus.pdf

I cannot see how to upload the sample to OffensiveComputing, so here is a link to a VirusTotal analysis instead. I guess anyone interested should be able to find a sample using the hashes on this link.

https://www.virustotal.com/en/file/5564bed78d23ad0ad198a0dbbf4196f5fdcc1eb8529673941736db18c3257e0b/analysis/

Tuesday, January 27, 2015

Statically linked binaries - Library detector

A program can be compiled dynamically or statically on Linux. For simplicity's sake, I considered only C binaries. When you dynamically compile a program, the libraries do not get included in the binary itself - the functions they export are resolved at runtime. In a statically linked binary, however, all the libraries that the binary needs to run are part of the binary itself. And if you are reversing something... that is a pain. Coz you don't know which part of the code is the application's own... and which part is library code. IDA detects a lot - but not all of it... not enough, for sure. So I decided to try and do something about it..

This little project came into my mind primarily while playing the reversing challenges in CTFs. The files there used to be massive (a four-digit number of functions) and very difficult to solve (for me anyway :)). I would never be able to identify which code was library code in the case of statically linked binaries. Thus I could never complete many of those challenges, OR it took me a lot of time. I still can't complete many, but that's a separate story ;)

Anyway, TL;DR: I wrote a few simple IDAPython/Python scripts that basically compare the IDB of the binary to be reversed against a whole lot of library code. The more idea you have about the exact libraries that were used while building the binary, the more accurate this tool will be.
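
To give a flavour of the general idea, here is a minimal IDAPython sketch (this is NOT the actual script in the repo - it hashes the raw bytes of every function, which real library code defeats easily because of embedded addresses and relocations):

import hashlib
import idautils, idc, ida_bytes, ida_name

def function_hashes():
    # Run inside an IDB: returns {md5 of function bytes: function name}
    hashes = {}
    for ea in idautils.Functions():
        end = idc.get_func_attr(ea, idc.FUNCATTR_END)
        data = ida_bytes.get_bytes(ea, end - ea) or b""
        hashes[hashlib.md5(data).hexdigest()] = idc.get_func_name(ea)
    return hashes

def mark_library_functions(lib_hashes):
    # Run inside the target IDB, with lib_hashes collected earlier from library IDBs
    for ea in idautils.Functions():
        end = idc.get_func_attr(ea, idc.FUNCATTR_END)
        h = hashlib.md5(ida_bytes.get_bytes(ea, end - ea) or b"").hexdigest()
        if h in lib_hashes:
            ida_name.set_name(ea, "lib_" + lib_hashes[h], ida_name.SN_NOCHECK)

You'd run function_hashes() once per library IDB, save the results, and then run mark_library_functions() in the IDB of the binary being reversed so that known library functions get renamed and can be ignored.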

It is certainly a start on a fairly complex problem IMO, and I hope that people more knowledgeable than me in this space can extend this and make it even more useful. At the very, very least, I hope it will at least show people what NOT to do while attempting to solve this problem :)

The code I wrote can be found here.

Hopefully over time - I can make this even better or maybe find a better solution to this problem.

Friday, January 23, 2015

Anti debug mechanisms - Windows

Been busy with some stuff so I haven't had time to blog much at all. Anyway, I was playing the Flare challenge and the last one was challenge 7, a 32-bit Windows PE file. I haven't yet managed to complete it due to some silly detail that I've overlooked :(. All the same though, there were a few really nice anti-debug and anti-VM mechanisms that I learnt about and thought of sharing. If I wait for the challenge to get over, I might end up never writing it :).

IsDebuggerPresent: This is one of the oldest tricks to detect a debugger. Usually the malware will call this function and check its return value. If it's 1, the process is being debugged.
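
Just to see the API in action (Windows only - and obviously real malware does this in native code, not Python):

import ctypes
if ctypes.windll.kernel32.IsDebuggerPresent():
    print("A debugger is attached")
else:
    print("No debugger")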

PEB BeingDebugged bit: The PEB is a block that contains a lot of information about the currently executing process. One of the first fields in this structure (offset 0x2) is the BeingDebugged flag. If the application is running inside a debugger, the value of this byte is 1. Malware can read it directly from the PEB instead of calling IsDebuggerPresent.

SIDT: The IDT is a data structure that holds the addresses of the routines that are called when specific interrupts occur on a machine. The SIDT instruction stores the base address of the current Interrupt Descriptor Table (IDT) into a memory location, and it can be run from user mode. On a normal physical machine, the base address of the IDT is lower than 0xd0000000; if the address is greater than that, the malware is most likely running inside a VM (the classic 'Red Pill' check).

VMXh: The IN instruction is normally privileged, so user-mode code that executes it should cause an exception. VMware, however, implements a 'backdoor' I/O port: the code puts the magic value 'VMXh' in EAX and the port number 0x5658 in DX, and when in eax, dx executes inside VMware, no exception occurs and EBX comes back filled with 'VMXh'. So malware will do this check as well to detect whether it's running inside VMware.

OutputDebugString: This API tries to send a string to the debugger - that's it. You'll see the string in the Log window of the debugger. The trick is that when no debugger is attached, the call effectively fails (on older Windows versions it sets an error code), so malware calls it and then checks whether it succeeded - typically via GetLastError - and makes its decision accordingly.

CC byte checking: The single byte 0xCC is a software breakpoint (INT 3). The moment you set a breakpoint while debugging a program, the first byte of the instruction you set it on is replaced with 0xCC. Malware might search an entire range of addresses for the presence of any 0xCC byte and exit if it finds one. Meaning... if there is a 0xCC byte where there shouldn't be one, there is also a debugger.

NtGlobalFlag: If a program is started inside a debugger, offset 0x68 of the PEB (NtGlobalFlag) is set to the value 0x70. This value comes from a few heap-checking flags that get switched on when the process is created under a debugger. Malware will check for this value and make a decision accordingly.

Apart from these 6 checks, the FireEye challenge also used the LocalTime64 API to check whether the time was between 5 and 6 pm on a Friday, and would go down the wrong path if it wasn't.

Your filename on disk needed to be backdoge.exe. You needed to be connected to the Internet so you could get a couple of IP addresses (192.30.252.154 and 192.203.230.10), which were also used. Lastly, it also retrieved a specific string 'jackRAT' from a Twitter post and used that as well.

Basically, each of these checks caused the code to XOR with a different string. If you went down the wrong path, it'd still XOR, but with the wrong string and your end result - which is supposed to be another PE file would be incorrect.

So that's where I'm stuck :( - I think I have all the checks down - but I'm missing some fine detail .. somewhere and getting an invalid file. Oh well, maybe someday - I think I will look at one of the online solutions now. I've spent a lot of time on this without much success.

Here are some references though which I found very useful:

http://handlers.sans.org/tliston/ThwartingVMDetection_Liston_Skoudis.pdf
http://joe4security.blogspot.com/2012/08/vm-and-sandbox-detections-become-more.html
https://brundlelab.wordpress.com/2012/10/21/detecting-vmware/
https://www.veracode.com/blog/2008/12/anti-debugging-series-part-ii
http://www.symantec.com/connect/articles/windows-anti-debug-reference

I hope you learnt a few things. None of this is really new..honestly... but hey, it's just a place where I keep writing my thoughts out as I learn things.