Finally, the time has come for me to start looking into Android application code. Apparently, the rapid growth of mobile technology has forced the payment brands to speed up their decision on accepting mobile phones as points of sale.

To say that a mobile phone is now accepted by the payment brands to capture credit card data may not be entirely correct; you still have to attach an additional device with certain security characteristics to it. However, a mobile application may or may not be required to process the data captured by the reader and forward it to the transaction management system.

Android-based phones of course take part in this whole mobile payment system, so at some point the security characteristics of the application must be verified (against some standard). This is where I was forced to start learning how to read and understand Android application code.

Unlike what I call “conventional” programming, an Android application does not come with an explicit “main” function. To me, the starting point of the analysis (step 1) is AndroidManifest.xml. Much like a flight manifest, which contains the list of passengers, AndroidManifest.xml lists the classes that are used during the lifetime of the application. To pinpoint the main activity, I look for the following pattern:

<activity android:name="MAIN_ACTIVITY_NAME"
          android:label="@string/app_name">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>

In the above snippet, I would suspect that the main activity is in MAIN_ACTIVITY_NAME.java, which is where the MAIN_ACTIVITY_NAME class resides. Next (step 2), I look for the class’s “onCreate” method. From there, it is just “conventional” code analysis.
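
For illustration, here is a minimal sketch of what such a main activity class typically looks like (the class name and layout resource are placeholders, not taken from any real application):

// MainActivity.java: placeholder name; in practice, whatever the manifest declares
import android.app.Activity;
import android.os.Bundle;

public class MainActivity extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);  // mandatory call to the parent class
        setContentView(R.layout.main);       // load the UI from res/layout/main.xml
        // Application logic starts here; this is where the code reading begins.
    }
}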

When loading a key onto a device, it is advisable that the key be encrypted under a higher key in the hierarchy, to add another layer of security. Why not just inject a clear-text key? Well, if it is in clear text, the person who injects the key definitely has knowledge of the key, and that person is not always you.

Imagine the following scenario. You produce an ATM; during production, a Terminal Master Key (TMK) is loaded into it (this key loading/key injection must be performed at a secure site, implementing split knowledge and dual control). For transaction purposes, the ATM requires a PIN Encryption Key (PEK), which is loaded once the ATM has been installed somewhere. Assuming the ATM does not support Remote Key Injection (RKI), you (or someone else) have to go to the ATM site and inject the key.

For security reasons, the PIN key to be loaded is encrypted under the Terminal Master Key before it is carried to the ATM. Even after it is encrypted, it is usually split into components. Now, if the key or key component came in clear text, it would be very easy to check whether you typed the correct key when injecting it. But that is not the case. So, how can you be sure that you have injected the correct key?

When injecting an encrypted key or key component, the key usually comes with a Key Check Value (KCV). This value is simply the result of encrypting a string of zeros with the key (using the chosen algorithm).

So, when you inject the encrypted key (say, the PIN key), the ATM performs a decryption operation using the TMK. Once the clear key is obtained, the ATM encrypts a string of zeroes with the PIN key and returns the result (not all of it, usually just the first six hex characters). The resulting string is the Key Check Value (or part of the Key Check Value). If it matches the KCV that was delivered with the key, you can be confident that the correct key was injected.
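
As an illustration (my own sketch, not taken from any particular device), the following Java snippet computes the KCV of a double-length 3DES key; the key bytes are made up purely for the demo:

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class KcvDemo {

    // Compute the KCV of a double-length (16-byte) 3DES key:
    // encrypt one block of zeroes and keep the first six hex digits.
    static String kcv(byte[] key16) throws Exception {
        byte[] k24 = new byte[24];                 // expand K1|K2 to K1|K2|K1
        System.arraycopy(key16, 0, k24, 0, 16);
        System.arraycopy(key16, 0, k24, 16, 8);
        Cipher c = Cipher.getInstance("DESede/ECB/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(k24, "DESede"));
        byte[] out = c.doFinal(new byte[8]);       // encrypt one zero block
        return String.format("%02X%02X%02X", out[0], out[1], out[2]);
    }

    public static void main(String[] args) throws Exception {
        byte[] pek = new byte[16];                 // made-up clear PIN key, demo only
        for (int i = 0; i < pek.length; i++) pek[i] = (byte) (i + 1);
        // The device would compare this value against the KCV supplied with the key.
        System.out.println("KCV: " + kcv(pek));
    }
}

In a real device, of course, this computation happens inside the secure hardware, so the clear key never has to leave it.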

When I hear the phrase “Key Management”, the first thing that comes to my mind is always “key”. Of course! It’s key management, after all. Now, I’ve read through several tutorials, books, guides, and standards related to key management, and they all seem consistent, which is good.

In key management, there is a “key hierarchy”. As in a company, there is a hierarchy involved in managing keys, so we can view each key as a manager.

The upper-level key is analogous to the upper-level manager. What does a “good” upper-level manager do? They protect their subordinates. And that is what the upper-level keys do.

The formal terminology for the upper-level keys is Key Encrypting Key, or maybe I should write it as Key-encrypting Key. As the name suggests, a Key-encrypting Key (KEK) is used to encrypt a key. Does it make sense? Well, that depends. Imagine that you send your house key to a friend because you are going to travel overseas. A courier picks up the key at your place and delivers it to your friend. Now, what if, somewhere along the way, the courier makes a copy of the key? That’s not what you want, of course. So you have to provide some sort of protection for the key. Well, why not send it in a locked box?

In the illustration above, the Key-encrypting Key is the key used to lock the box. Hence, the role of this key is to protect another key. Now, how do you deliver this Key-encrypting Key to your friend so that he can open the box? Well, that’s another story. We will discuss it in a separate part called “Key Distribution”, which is still part of “Key Management”.

Alright, let’s go back to “Key Hierarchy”. A Key-encrypting Key, along with all the keys encrypted under it, falls under one hierarchy. You may have a KEK at the top of the hierarchy; on the second level you may have many other KEKs (each encrypted with the KEK on the level above it), and so on; and at the bottom level you may have function keys. These function keys may be a Data Encryption Key (DEK), a PIN Encryption Key (PEK), etc.
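
To make the hierarchy concrete, here is a small Java sketch of my own showing a function key being wrapped under a KEK. I use AES purely for the sake of the example; payment systems have traditionally used 3DES here, and proper key-wrapping schemes also add integrity checking:

import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeyHierarchyDemo {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey kek = kg.generateKey();  // the "upper-level manager"
        SecretKey pek = kg.generateKey();  // a function key, e.g. a PIN Encryption Key

        // Lock the box: encrypt (wrap) the function key under the KEK
        Cipher c = Cipher.getInstance("AES/ECB/NoPadding");
        c.init(Cipher.WRAP_MODE, kek);
        byte[] wrappedPek = c.wrap(pek);

        // Open the box: whoever holds the same KEK can recover the function key
        c.init(Cipher.UNWRAP_MODE, kek);
        SecretKey recovered = (SecretKey) c.unwrap(wrappedPek, "AES", Cipher.SECRET_KEY);
        System.out.println("Round trip OK: "
                + Arrays.equals(pek.getEncoded(), recovered.getEncoded()));
    }
}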

Ok, that is all for Key Hierarchy.

I’ve been having this irritating problem with the connection between Samba and Windows 7. Don’t get me wrong, it’s not that I could not connect them; it’s the slow response time that keeps bugging me. I’ve tried every optimization guide I could find on the internet, but still no improvement.

As I said, on the server side I’ve tried every optimization guide I could find. On the client side, same story. Turn this off, turn that on, nothing works. The only fix I remember working was turning off the Windows WebClient service, which reduced the delay when I right-clicked the shared folder (before I mapped the drives).

Recently, I realized that Windows 7 keeps looking for desktop.ini, folder.jpg and folder.png. I haven’t figured out the reason behind that, nor how to prevent Windows 7 from looking for them. I know that there are several guides on the internet for preventing Windows 7 from creating desktop.ini automatically. However, my problem is not the creation of desktop.ini, but the search for the three files mentioned above.

I discovered the problem while observing the traffic with Wireshark. But just searching shouldn’t be a problem, right? I mean, how long can it take? Well, I also found out that it’s not just the searching: the response from Samba when the file is not found, and how Windows handles that response, cause the bigger problem.

I think that somewhere in the middle, when a file is not found, Samba and Windows 7 are not speaking the same language. But that is just my opinion. To make it worse, when that happens, I think Windows tries to process the unexpected response from Samba, or perhaps waits until Samba gives the expected response. Now, this can take a while.

As a temporary solution, I have created a desktop.ini in all shared folders and sub-folders. Does it remediate the problem? Yes, it does, at least to a significant degree, and I haven’t even created folder.jpg and folder.png in those folders yet. The drawback is that when a user creates a new sub-folder and later tries to access it, the access delay comes back, since there isn’t any desktop.ini in the new sub-folder.

To close part 1: at the moment my plan is either to figure out how to turn off that searching process, or to create desktop.ini automatically whenever a user creates a folder; a rough sketch of the latter follows below. Hopefully I can resolve this soon.
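
Here is that rough sketch, in Java, assuming the watcher runs on the Linux box hosting the share; the share path below is made up, and folders that already exist when the watcher starts are not registered:

import java.io.IOException;
import java.nio.file.*;
import java.util.HashMap;
import java.util.Map;
import static java.nio.file.StandardWatchEventKinds.ENTRY_CREATE;

public class DesktopIniWatcher {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Made-up share path; pass the real one as the first argument
        Path root = Paths.get(args.length > 0 ? args[0] : "/srv/samba/share");
        WatchService ws = FileSystems.getDefault().newWatchService();
        Map<WatchKey, Path> dirs = new HashMap<>();
        dirs.put(root.register(ws, ENTRY_CREATE), root);

        while (true) {
            WatchKey key = ws.take();              // block until something is created
            Path dir = dirs.get(key);
            for (WatchEvent<?> ev : key.pollEvents()) {
                if (ev.kind() != ENTRY_CREATE) continue;
                Path created = dir.resolve((Path) ev.context());
                if (Files.isDirectory(created)) {
                    Path ini = created.resolve("desktop.ini");
                    if (Files.notExists(ini)) Files.createFile(ini);
                    // Watch the new sub-folder too, so its own children are covered
                    dirs.put(created.register(ws, ENTRY_CREATE), created);
                }
            }
            key.reset();
        }
    }
}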

I had a task which required an active serial port on my laptop. Well, like most laptops these days, mine was not equipped with a serial port. For that reason, I had to create a virtual serial port to continue. Thanks to Josep Nygrin, I managed to create a (pair of) serial ports.

In short, all I had to do was use “socat”, which usually comes with the latest Linux distros, and execute the following command:

socat -d -d pty,raw,echo=0 pty,raw,echo=0

Socat returned a message pretty much saying that a pair of virtual devices had been created: /dev/pts/3 and /dev/pts/6 (the names may differ in each case). All I had to do then was start two instances of PuTTY, one attached to /dev/pts/3 and the other attached to /dev/pts/6. Typing in one instance of PuTTY causes the other instance to show the result.

Basically, all socat did was create a “loop” between the two virtual devices. A more detailed explanation can be found here.

Goal: find files that changed after a certain point in time, excluding the /proc and /dev directories from the search.

Steps:

  1. Create a file with the current timestamp: “touch stamp”.
  2. Make changes to the system.
  3. Find the affected files: “find / \( -name proc -o -name dev \) -prune -o -type f -newer stamp -print”.

Step 3 reads as: find, in / (the root directory), all files modified after the creation of “stamp”, but prune (exclude) any directory named proc or dev from the search.

This writing recounts my experience in studying computer forensics. The book I’ve been using as a guide is “Real Digital Forensics: Computer Security and Incident Response” by Keith J. Jones, Richard Bejtlich and Curtis W. Rose, published by Addison-Wesley.

The first chapter is about conducting forensics on a Windows-based machine that is still live while the forensic activity is conducted. The Unix version of the same activity is also discussed in the book, and my experience with that will be put here as well.

Setting Up Connection

Alright, since we are conducting forensics on a live machine, all the collected data must go to another machine. We don’t want to compromise the data by adding rubbish as a result of our activity. So, the first thing to do is to set up a connection from the “victim” machine to our “collector” machine. I used the all-powerful netcat to do this.

“collector” : nc -l -k 9999 >> collect.dmp

This means that we open port 9999 on the collector’s machine, which LISTENs (-l) for a connection and keeps listening (-k) until the process is terminated (Ctrl-C) on the collector’s machine. Any data/text sent to this port is appended to the file collect.dmp.

“victim” : <command> | nc <collector’s IP> 9999

This means that any output resulting from executing a command (<command>) on the victim’s machine is piped (treated as input) to the nc command. The output is then sent to the collector’s machine (<collector’s IP>), specifically through the open port (9999).

Knowing The Time

Knowing the time at which the forensic activity is conducted on the victim’s machine is crucial. In Linux, we can use the ‘date’ command. Since we want this in our collect.dmp file, the complete command becomes:

date | nc <collector’s IP> 9999

Identify Active Network Connections

We would like to know whether the security breach was conducted remotely or locally, and this activity may reveal some suspicious network connections. The command to execute is: netstat -an. Try to identify which ports are open by design and which aren’t. Unfortunately, the website http://www.portsdb.org no longer exists as a port identifier to help with this activity. However, I think we can just use a Google search.

Which Application is Responsible

Ports are opened by applications. Hence, it is important to know which applications are actively opening ports. The command ‘netstat -nab’ may provide the required information; note that it must be executed with administrator privileges. A (free) third-party application that can be used to get the same information is ‘fport’ from foundstone.com. However, when I tried it under Windows 7, it didn’t give me the expected result.

Who Is Connected (Machine)

Windows shared resources are usually accessible to authorised users and can be identified by their NetBIOS names. The other way around, the machines that access these resources can also be identified by their NetBIOS names. The command ‘nbtstat -c’ reveals the connected computers (from the cache). Note, however, that a NetBIOS name can easily be changed, so it may not be reliable by itself.

Caught in The Act

If we’re lucky, we may spot the intruder in the act. Inspect the currently logged-in users with ‘PsLoggedOn’, which is included in the Sysinternals Suite from http://www.sysinternals.com. Now, judging by the number of occurrences of the word Sysinternals in the book, I am certain that this collection of tools is worth having.

Where Did All My Traffic Go?

Let’s check whether our traffic has been redirected by the intruder. The command ‘netstat -rn’ or ‘route print’ can be used to reveal the routing information. Make sure the output is as expected, and take note of any anomaly.

What Is Running Now?

Well, I guess it is no secret that we need to identify the currently running processes. The tool ‘pslist’ from the Sysinternals Suite fits the purpose. Watch the process name, user time, kernel time, and elapsed time. Suspicious activity can be spotted by comparing a process’s elapsed time with that of trusted processes; trusted processes tend to show similar elapsed times, since they all started during the boot process.