4.3 Answering Research Questions
where $P_{cloud}$ is the cost per hour of a virtual compute unit and $T_{cloud}$ is the computing time to process one unit of data with one virtual compute unit in the cloud.
A Cost Model for Botnet-based Attacks. The maintenance cost of a botnet-based attack, $MC_{botnet}$, is similar to that of Web Worker attacks. However, the bots will not remain active forever: attackers lose control over some bots over time due to user awareness or anti-virus scanning. We define $s$ as the average percentage of bots lost each day. To maintain a botnet of the same size, attackers must pay an additional cost $C_{loss} = s \cdot P_{botnet} \cdot t$.
\[
MC_{botnet} = P_{server} \cdot t + C_{bandwidth} + C_{loss} \tag{4.6}
\]
The acquisition cost of a botnet-based attack, $AC_{botnet}$, is the cost to purchase the botnet: $AC_{botnet} = P_{botnet} \cdot n$, where $P_{botnet}$ is the unit price to rent or buy a bot and $n$ is the number of bots needed to launch an attack equivalent to a Web Worker attack.
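To make the cost comparison concrete, the sketch below evaluates these formulas for a hypothetical botnet. The parameter values (bot price, loss rate, server and bandwidth costs, attack duration) are illustrative assumptions only, not figures measured in this study.

```typescript
// Sketch of the botnet cost model above (Eq. 4.6). All parameter values are
// illustrative assumptions, not figures measured in this study.

interface BotnetParams {
  pBotnet: number;    // P_botnet: unit price to rent or buy one bot
  n: number;          // number of bots for an attack equivalent to a Web Worker attack
  pServer: number;    // P_server: command server price per unit of time
  cBandwidth: number; // C_bandwidth: bandwidth cost over the attack period
  s: number;          // fraction of bots lost per day (e.g., 0.05 for 5%)
  t: number;          // attack duration (days, assumed to match P_server pricing)
}

// AC_botnet = P_botnet * n
const acquisitionCost = (p: BotnetParams): number => p.pBotnet * p.n;

// C_loss = s * P_botnet * t (as defined in the text)
const lossCost = (p: BotnetParams): number => p.s * p.pBotnet * p.t;

// MC_botnet = P_server * t + C_bandwidth + C_loss   (Eq. 4.6)
const maintenanceCost = (p: BotnetParams): number =>
  p.pServer * p.t + p.cBandwidth + lossCost(p);

// Example with made-up values.
const example: BotnetParams = {
  pBotnet: 0.5, n: 10_000, pServer: 2, cBandwidth: 50, s: 0.05, t: 7,
};
console.log("AC_botnet =", acquisitionCost(example)); // 5000
console.log("MC_botnet =", maintenanceCost(example)); // 14 + 50 + 0.175 = 64.175
```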
4.3.4 Answering Research Question 4: Attack Limitations and Countermeasures

Web Worker-based botnets persist only while the victim keeps the attack page open; once the user closes the tab, the botnet control terminates immediately. Since users' page viewing time is often short and unpredictable, browser-based botnets are also transient and unstable. There is no doubt that traditional botnets are still favored, as they can take a persistent foothold on a victim's machine with fewer restrictions. However, browser-based botnets have their own advantages. With continued improvements in JavaScript performance and increases in the number of web-connected devices, Web Worker-based attacks might become a significant attack vector. The threat could be especially great if ad networks are heavily exploited to launch these attacks.
To prevent attackers from misusing online advertisements to launch Web Worker attacks, ad network providers could enforce stricter review processes on HTML5 ads. However, since Web Worker attacks do not use any dangerous JavaScript functions and do not harm the end user directly, they can be difficult to distinguish from benign code. Attackers can also use code obfuscation techniques to make detection even harder. In addition, both manual and automated review pose a large burden on advertisement networks given the volume of ads being submitted. Another strategy for preventing online ads from becoming a hotbed for Web Worker attacks is to adjust the pricing model. As our cost analysis in Section 4.3.3 demonstrated, online advertisements are economically attractive to attackers only when they can achieve very low click-through rates and low costs per click. Ad network providers could therefore penalize ads with low click-through rates to help disincentivize this type of attack, as sketched below.
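As one illustration of such a pricing adjustment, the following sketch applies a surcharge to ads whose click-through rate falls far below a minimum expectation. The data structure, threshold, and surcharge factor are hypothetical and serve only to show the idea; no real ad network's pricing rules are implied.

```typescript
// Hypothetical pricing-adjustment sketch: penalize ads whose click-through
// rate (CTR) is far below a minimum expected rate, making impression-heavy,
// click-poor campaigns (the profile of a Web Worker attack payload) less
// economical. Thresholds and prices are illustrative assumptions.

interface AdStats {
  adId: string;
  impressions: number;
  clicks: number;
  baseCpmUsd: number; // advertiser's bid price per 1,000 impressions
}

function clickThroughRate(ad: AdStats): number {
  return ad.impressions === 0 ? 0 : ad.clicks / ad.impressions;
}

// Surcharge grows as CTR drops below minCtr, up to (1 + maxSurcharge) * base.
function adjustedCpm(ad: AdStats, minCtr = 0.001, maxSurcharge = 4): number {
  const ctr = clickThroughRate(ad);
  if (ctr >= minCtr) return ad.baseCpmUsd;
  const deficit = (minCtr - ctr) / minCtr; // between 0 and 1
  return ad.baseCpmUsd * (1 + deficit * maxSurcharge);
}

// Example: an ad with 1,000,000 impressions but only 50 clicks.
const suspicious: AdStats = { adId: "ad-42", impressions: 1_000_000, clicks: 50, baseCpmUsd: 0.5 };
console.log(adjustedCpm(suspicious).toFixed(2)); // 2.40, well above the 0.50 base price
```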
One possible countermeasure to Web Worker attacks is to enforce a Content Security Policy (CSP). Content Security Policy is a standard to prevent XSS attacks by defining an HTTP response header through which a server can specify the domains that browsers are allowed to make requests to. If the server restricts the whitelisted domains to only itself, then a Web Worker will not be able to make DDoS requests or contact a coordinating server for password cracking. However, very few websites currently implement this feature because websites rely heavily on external resources such as images, ads, and other services, and maintaining a whitelist of legitimate domains requires tremendous effort.
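As a concrete illustration, the sketch below shows one way a Node.js server might attach such a policy header. The directive values are an example of a restrictive whitelist under the assumption that the site serves all of its own resources; they are not a recommendation for any particular deployment.

```typescript
// Minimal Node.js sketch of serving pages with a restrictive Content Security
// Policy. Directive values are illustrative; real sites that whitelist ad
// networks or third-party APIs would need a longer, harder-to-maintain list.
import * as http from "node:http";

const csp = [
  "default-src 'self'",  // fallback: only same-origin resources
  "script-src 'self'",   // no third-party or inline scripts
  "worker-src 'self'",   // Web Worker scripts may only come from this origin
  "connect-src 'self'",  // fetch/XHR/WebSocket only to this origin
].join("; ");

const server = http.createServer((_req, res) => {
  res.setHeader("Content-Security-Policy", csp);
  res.setHeader("Content-Type", "text/html; charset=utf-8");
  res.end("<html><body>Page served with a strict CSP</body></html>");
});

server.listen(8080, () => console.log("Listening on :8080 with CSP:", csp));
```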
One way to mitigate Web Worker DDoS attacks would be to enforce stricter limits in browsers on the number of concurrent requests, the number of references to non-local objects, and the sustained rate of requests over time. For example, it may make sense for a web page to send a burst of 100-200 requests to several different hosts when it first loads, but a sustained rate of 10,000 requests per minute is not indicative of legitimate behavior. However, these restrictions need to be carefully balanced against the needs of legitimate websites, where it is common for webpages to fetch resources from different domains to display advertisements, access third-party APIs, and so on.
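The sketch below illustrates the kind of sliding-window heuristic a browser could apply per page. The window length and threshold are assumptions chosen to tolerate a page-load burst of a few hundred requests while flagging sustained rates in the thousands per minute; they are not actual browser limits.

```typescript
// Sketch of a per-page sliding-window request monitor. A page-load burst of
// 100-200 requests stays below the threshold; a sustained rate of thousands
// of requests per minute exceeds it quickly. Thresholds are illustrative.

class RequestRateMonitor {
  private timestamps: number[] = [];

  constructor(
    private maxRequestsPerWindow = 1000, // well above a normal page-load burst
    private windowMs = 60_000,           // one-minute sliding window
  ) {}

  // Record an outgoing request; returns true if the page should be flagged.
  recordRequest(now: number = Date.now()): boolean {
    this.timestamps.push(now);
    const cutoff = now - this.windowMs;
    while (this.timestamps.length > 0 && this.timestamps[0] < cutoff) {
      this.timestamps.shift(); // drop requests that fell outside the window
    }
    return this.timestamps.length > this.maxRequestsPerWindow;
  }
}

// Example: a page issuing 50 requests per second trips the monitor within
// roughly 20 seconds, while a 200-request startup burst never does.
const monitor = new RequestRateMonitor();
let flaggedAtMs = -1;
for (let ms = 0; ms < 60_000; ms += 20) {
  if (monitor.recordRequest(ms)) { flaggedAtMs = ms; break; }
}
console.log("Flagged after", flaggedAtMs, "ms"); // ~20000 ms
```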
Countermeasures against application-layer DDoS attacks are more difficult than against network-layer attacks because malicious and non-malicious traffic cannot easily be differentiated. One approach is to try to distinguish real human traffic from bot traffic by comparing traffic signatures such as request rate, IP address, HTTP headers, and JavaScript footprint. What makes things more complicated is that there are "benign bots", such as search engine crawlers, monitoring tools, and other automated request-generation infrastructure; a traffic classifier may misclassify these bots and thereby hurt a website's search engine optimization or monitoring.
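A simplified sketch of such signature-based filtering is shown below. The feature set, score thresholds, and benign-bot allowlist are hypothetical; a production classifier would rely on far richer signals and likely a learned model.

```typescript
// Toy signature-based traffic classifier: combines request rate, header
// completeness, and a JavaScript-execution check, while allowlisting known
// benign crawlers. All thresholds and patterns are illustrative assumptions.

interface TrafficSignature {
  ip: string;
  userAgent: string;
  requestsPerMinute: number;
  hasCookies: boolean;          // typical browsers carry session cookies
  executedJsChallenge: boolean; // e.g., completed a lightweight JS check
}

const BENIGN_BOT_PATTERNS = [/Googlebot/i, /bingbot/i, /UptimeRobot/i];

type Verdict = "human" | "benign-bot" | "suspicious";

function classify(sig: TrafficSignature): Verdict {
  // Known crawlers and monitors are allowed even though they look automated.
  if (BENIGN_BOT_PATTERNS.some((p) => p.test(sig.userAgent))) {
    return "benign-bot";
  }
  let score = 0;
  if (sig.requestsPerMinute > 300) score += 2; // far beyond human browsing
  if (!sig.hasCookies) score += 1;
  if (!sig.executedJsChallenge) score += 2;    // headless clients often fail this
  return score >= 3 ? "suspicious" : "human";
}

// Example: a high-rate client that never runs the JS challenge.
console.log(classify({
  ip: "203.0.113.7",
  userAgent: "Mozilla/5.0",
  requestsPerMinute: 900,
  hasCookies: false,
  executedJsChallenge: false,
})); // "suspicious"
```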
For computationally intensive tasks such as password cracking or rainbow table generation, countermeasures are tricky to develop. We cannot simply block JavaScript or Web Workers because they are widely used in regular webpages. One possible solution is to monitor system resource consumption and warn users of abnormal behavior. For example, browsers could offer a plug-in that displays the CPU utilization and network traffic for each webpage or tab and raises an alarm when the user is viewing a page with abnormally high resource consumption.
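The following sketch shows the shape such a per-tab monitor could take. The sampleTabUsage and warnUser hooks are hypothetical placeholders, since a real plug-in would need a browser-specific API to read per-tab CPU and network usage, and the thresholds are illustrative assumptions.

```typescript
// Sketch of a per-tab resource watchdog. `sampleTabUsage` and `warnUser` are
// hypothetical hooks standing in for browser-specific extension APIs; the
// thresholds are illustrative assumptions.

interface TabUsage {
  tabId: number;
  url: string;
  cpuPercent: number;  // share of one CPU core used by this tab
  networkKBps: number; // sustained network throughput of this tab
}

interface AlarmThresholds {
  cpuPercent: number;
  networkKBps: number;
  sustainedSamples: number; // consecutive high samples before raising an alarm
}

const DEFAULTS: AlarmThresholds = { cpuPercent: 80, networkKBps: 500, sustainedSamples: 5 };

function createWatchdog(
  sampleTabUsage: () => TabUsage[],   // hypothetical per-tab usage sampler
  warnUser: (tab: TabUsage) => void,  // e.g., show a notification or badge
  thresholds: AlarmThresholds = DEFAULTS,
): () => void {
  const strikes = new Map<number, number>();
  return function poll(): void {
    for (const tab of sampleTabUsage()) {
      const over =
        tab.cpuPercent > thresholds.cpuPercent ||
        tab.networkKBps > thresholds.networkKBps;
      const count = over ? (strikes.get(tab.tabId) ?? 0) + 1 : 0;
      strikes.set(tab.tabId, count);
      if (count === thresholds.sustainedSamples) warnUser(tab); // alarm once per streak
    }
  };
}

// Usage sketch: poll every five seconds.
const poll = createWatchdog(() => [], (tab) => console.warn("High resource usage:", tab.url));
setInterval(poll, 5_000);
```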