The 3 Most “Resistant and Persistent” Application Security Vulnerabilities!!

 

My team tests at least 5 applications a week on average. We constantly work with Web Apps, Web Services, Mobile Apps and now, IoT-driven applications, which are typically backed by a pretty large web services layer. We work with multiple product engineering teams, especially developers, to help them fix those niggling security problems.

Recently, I had a question that I wanted answered in substantive terms: “Which vulnerabilities are most resistant and persistent across all the apps that we test?” This is a pretty expansive question. We test scores of apps, which between them carry a much larger set of vulnerabilities. I was looking for application vulnerabilities that either haven’t been fixed over time or have been fixed at a given time, but have resurfaced elsewhere. These vulnerabilities I would put into the “Resistant, Persistent” category. I loaded our sanitized vulnerability metadata onto an Elasticsearch server and, armed with my Python scripts for analytics and aggregation, I crunched some numbers and drilled down to the 3 most resistant application security vulnerabilities, from January 2015 to the present day.
1. Cross Site Scripting (XSS):

There’s nothing new about Cross Site Scripting. It’s been a permanent fixture on the OWASP Top 10 for as far back as I can remember. One would think that Cross Site Scripting (XSS) would have been fixed, or at least marginalized, by this point. However, unlike SQL Injection, Cross Site Scripting has continued to enjoy relevance and multiple leases of life, simply because the application attack surface has grown significantly: tons of third-party front-end JavaScript libraries, inconsistent output encoding, dependencies on CDNs, and so on. Cross Site Scripting has proven to be a formidable vulnerability to fix. Nearly every client we test seems to have at least one significant Cross Site Scripting flaw, despite the fact that modern web frameworks come with “batteries-included” approaches to validation and output encoding. So, if your app uses any of the elements I just mentioned, you should probably be looking for this highly resistant and persistent security flaw.
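To make the output-encoding point concrete, here is a minimal sketch in Python using only the standard library; the function and parameter names are illustrative and not taken from any particular framework.

```python
# Minimal sketch of reflected XSS and its fix; names here are illustrative.
import html

def render_greeting_unsafe(name: str) -> str:
    # Vulnerable: attacker-controlled input is concatenated straight into HTML,
    # so name = "<script>alert(1)</script>" executes in the victim's browser.
    return "<p>Hello, " + name + "</p>"

def render_greeting_safe(name: str) -> str:
    # Fixed: HTML-encode untrusted input for the HTML-body context before output.
    # Other contexts (attributes, JavaScript, URLs) need their own encoding rules.
    return "<p>Hello, " + html.escape(name) + "</p>"

if __name__ == "__main__":
    payload = "<script>alert(1)</script>"
    print(render_greeting_unsafe(payload))  # script tag survives intact
    print(render_greeting_safe(payload))    # &lt;script&gt;... rendered as text
```

The wider point stands even with “batteries-included” frameworks: encoding has to be applied consistently, in the right context, across every output path, including the third-party front-end code you pull in.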

2. Insecure Direct Object Reference (IDOR):
These are really authorization flaws. Using them, an attacker can bypass permission-management controls and gain unauthorized access to sensitive information from other user accounts or other data sets. IDORs typically manifest in two ways: the first (uncommon) type, where the attacker is able to manipulate model data and gain access to privileged functionality; and the second (common) type, where attackers can identify primary key/identifier values and use them to gain access to other users’ accounts or to elevate privileges. This vulnerability is “resistant and persistent” for the following reasons:
  • There’s a lot more impetus given to authentication than authorization in the design/architecture of an application. What’s worse, authorization is a highly design-centric activity that is usually done poorly: it’s not granular enough, it’s not comprehensive enough, and its coverage is inadequate. So it’s doomed to fail.
  • With Web Services (where there’s no UI control), this is rampant, because developers just don’t expect/realize that these bugs are in the system.
  • These vulnerabilities cannot be tested directly with automated vulnerability scanning tools like AppScan, Burp, etc. They have to be subjected to manual/scripted security testing, with special impetus on authorization testing.
  • The other (probably smaller) reason is that developers don’t design primary keys to be truly random. Some of them are basic incremental integers (1, 2, 3) or what they think is random (20171121001), which, I am sure by now, you realize is anything but.

Direct Object Reference flaws can be deadly. They need to be understood and addressed the right way.
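To illustrate the common type, here is a minimal sketch of the server-side ownership check that is usually missing, written against a Flask-style route; the route, data store, and field names are purely illustrative.

```python
# Minimal sketch of an IDOR fix: never trust the identifier in the URL alone.
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "change-me"  # demo only

# Pretend data store: invoice id -> owning user id
INVOICES = {101: {"owner_id": 7, "amount": 250}, 102: {"owner_id": 8, "amount": 975}}

@app.route("/invoices/<int:invoice_id>")
def get_invoice(invoice_id):
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        abort(404)
    # The authorization check: ownership is verified against the authenticated
    # session, server-side, for every object-level access.
    if invoice["owner_id"] != session.get("user_id"):
        abort(403)
    return jsonify(invoice)
```

Non-guessable identifiers (UUIDs, tokens from a CSPRNG) raise the cost of enumeration, but the ownership check above is the actual fix.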

3. Cross Site Request Forgery (CSRF/XSRF):


Cross Site Request Forgery is really an attack against authentication. In short, an attacker makes the user do things the user never intended to do on your application. This could be anything from forcibly changing the user’s password to adding an unauthorized rule on a firewall web console. In most of the web apps we test, CSRF is a common finding. The effects of CSRF are only aggravated by XSS on the same application. To developers who think that CSRF only works on browser-based web apps: think again. Web Services can equally be affected by CSRF attacks.
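As a rough illustration, here is a sketch of the synchronizer-token pattern in plain Python; most frameworks ship CSRF protection out of the box, and the function and session-key names below are assumptions made for illustration.

```python
# Minimal sketch of the synchronizer-token pattern for CSRF protection.
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    # Generate a per-session random token and store it server-side.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token  # embed this in a hidden form field or custom request header

def verify_csrf_token(session: dict, submitted: str) -> bool:
    # Compare in constant time; reject state-changing requests without a match.
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted or "")
```

Because the attacker’s forged request cannot read the token out of the victim’s page or session, it fails the check; this is also why an XSS flaw on the same application defeats CSRF protection.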

Disclaimer: What I have written above is in no way a comprehensive list of application vulnerabilities. These are just the 3 vulnerabilities that my team and I are seeing more frequently than others in modern applications. They happen to be both resistant and persistent because they either stay unmitigated for long periods of time or resurface elsewhere after they have been fixed.

3 TAKEAWAYS FROM THE QATAR NATIONAL BANK BREACH

The big story this week is around the “alleged” data breach of Qatar National Bank (QNB). The attack came as a bolt from the blue, with the attackers releasing a massive data dump of over 1.5GB on the open internet. The file was available for a short while as a zip file that could be downloaded by anyone who could find it. While it isn’t clear whether this is the entire dataset extracted from QNB, the data exposed on the internet is a significant quantum, and it has had the obvious effect. I am sure that customers of the bank are “running for the hills”, as several details concerning customers, especially high-profile customers, have been released on the internet, including their account information, financial transfers and so on.
I have had a look at the data dumps from the breach and they are not pretty. Here’s what I think happened and what other companies can learn from this breach.

Poor Data Protection

As per reports, attackers have definitely used SQL Injection as one of the modes of attack. There are clear logs from tools like sqlmap that were run against a Java web application querying an Oracle database. The SQL Injection seems to have been trivially exploited, with UNION queries used to extract a ton of data from the back-end Oracle database.
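For illustration, here is a minimal sketch of this flaw class (and its fix) using Python’s built-in sqlite3 driver as a stand-in; the table and column names are invented and have nothing to do with QNB’s actual schema.

```python
# Minimal sketch of SQL Injection via string concatenation vs. a parameterized query.
import sqlite3

def find_account_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: concatenation lets input like "' UNION SELECT ... --" rewrite the query.
    query = "SELECT id, balance FROM accounts WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_account_safe(conn: sqlite3.Connection, username: str):
    # Fixed: a parameterized query keeps attacker input as data, never as SQL.
    return conn.execute(
        "SELECT id, balance FROM accounts WHERE username = ?", (username,)
    ).fetchall()
```

Parameterized queries (or an ORM that uses them underneath) close off the UNION-based extraction technique described above; defense in depth would add least-privilege database accounts so a single injectable query cannot dump every table.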

THE 10 STEP APPLICATION SECURITY TEST

Securing apps is a major challenge and achievement for any organization. For an app to be secure, it should not only be developed securely; the entire lifecycle must have infusions of security. I use the word ‘infusions’ because the concept of ‘controls’ and ‘requirements’ seems highly cumbersome. Nevertheless, security in applications can be quite difficult to grasp. Try reading any document around “Secure Software Development Life cycle” or “Secure SDLC” and you will find your head spinning with words like “Threat Modeling”, “STRIDE”, “Trust Boundaries”, and so on and so forth. Many people I meet are overwhelmed by any talk of security in apps as a result. If you google “Application Security Maturity Model” or something similar, you will need a lot of coffee to get through those documents.

Keeping that in mind, and taking inspiration from Joel Spolsky’s 12 steps, I have created a quick and dirty ‘checklist’ of sorts for building secure apps. Keep in mind that this is a simple ‘Yes’ or ‘No’ test. The idea is that you go over this checklist and make mental answers of ‘Yes’ or ‘No’ as you go along. If your score is 7 or below, you need to fix some of the deficient items in your application security processes/practices. If your score is 8 or above, you can be sure you are in decent shape, with some room for improvement. Similarly, if you are at 2 or 3, your application security practices need serious intervention. So, without further ado, let’s get into it.

Abhay’s 10 Step Secure App Test

  1. Do you include security requirements other than “authentication” in your Initial Requirements spec?
  2. During interviews, do your developers answer questions on security like “Protection against SQL Injection or CSRF”?
  3. Do you do hybrid security testing?
  4. Do you do security code reviews, at least once a year?
  5. Apart from Use-cases, do you create “Abuse cases”?
  6. Do you subscribe to security feeds from your application platform(s)?
  7. Do you do a security check for each release?
  8. Are your devs and architects trained on application security?
  9. Do you maintain a Secure Coding Standard for your Developers?
  10. Does your management understand/appreciate the impact of an application security breach?

1. Do you include security requirements other than “authentication” in your Initial Requirements spec?

Most companies creating software have software specifications. Call it what you will (I have heard these documents called everything from High Level Document to Requirements Document to Design Spec), but very loosely, a document of this nature provides some input into the functionality of the application and its key components. This is typically done at the requirements-gathering phase for each project or sprint (in the case of Agile, etc.). If your document contains only basic security requirements like authentication, with statements like “Application shall have unique usernames and passwords”, or basic authorization with statements like “Application shall enforce permissions and privileges based on Role Based Access Controls”, then it grossly under-represents application security needs. Application security has several areas that need to be included, even at a high level, in a requirements document. Aspects of logging and correlation, encryption (with specifics), sensitive data protection requirements, secure coding requirements and so on must be included in the application’s requirements document or initial spec.

2. During interviews, do your developers answer questions on security like “Protection against SQL Injection or CSRF”?

Technical questions or a “show and tell” style of interview are important when hiring developers. Developers are asked a series of questions and are sometimes even required to write some proof-of-concept code to show their skills in developing apps. Security is equally important. If your developers don’t know what a “Host Header Injection” attack or “Cross Site Request Forgery” is, they can’t possibly be expected to develop an app that is resilient against modern attacks. Developers already have tight schedules and delivery timelines; a lack of basic application security skills (especially in the platform of their competence) only exacerbates issues for your application’s security. I strongly believe that developer interviews and technical discussions must have a generous peppering of application security questions, like the OWASP Top 10 flaws or similar.

3. Do you do hybrid security testing?

I have seen several companies that put their application through an automated app sec vulnerability scanner and go to production after the scanner has thrown a clean report. This is not nearly enough. With apps getting more complex (RIA frameworks, web services, middleware, caching, queuing, etc.), the need for both manual and automated security testing (called hybrid testing) has increased. A skilled penetration tester can find far more issues than an automated vulnerability scanner can. If you are only using an application security vulnerability scanner (or not using anything at all), you should seriously consider a hybrid application security test (penetration test).
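As a rough sketch of what the “manual/scripted” half of hybrid testing adds on top of scanner output, here is one way a tester might script an authorization check that scanners typically miss (see the IDOR discussion earlier); the base URL, paths, and cookie name are assumptions for illustration only.

```python
# Replay one user's requests with another user's session and flag responses
# that should have been forbidden; a likely sign of authorization gaps.
import requests

BASE = "https://app.example.com"          # assumption: the app under test
RESOURCES = ["/invoices/101", "/invoices/102", "/admin/users"]

def fetch_as(session_cookie: str, path: str) -> int:
    resp = requests.get(BASE + path, cookies={"session": session_cookie}, timeout=15)
    return resp.status_code

def check_horizontal_access(owner_cookie: str, other_cookie: str) -> None:
    for path in RESOURCES:
        owner_status = fetch_as(owner_cookie, path)
        other_status = fetch_as(other_cookie, path)
        # If the owner can read it and the other user gets anything but a denial,
        # flag it for manual review.
        if owner_status == 200 and other_status not in (401, 403, 404):
            print(f"[!] possible IDOR/authz issue: {path} -> {other_status}")

if __name__ == "__main__":
    check_horizontal_access("owner-session-cookie", "other-user-session-cookie")
```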

4. Do you do security code reviews, at least once a year?

Penetration tests are useful. Extremely useful. However, security code reviews are essential to securing applications. Security code reviews can be peer reviews, tool-assisted static/dynamic analysis reviews, or expert reviews. Either way, the important thing is that risky code is being evaluated and red-flagged for developers to fix.

5. Apart from Use-cases, do you create “Abuse cases”?

A use case is a scenario that represents a user’s interaction with a system and the steps of that interaction. This is something most of us in the world of software are familiar with. An abuse case is a scenario that represents a user’s (attacker’s) attempt to compromise the app, for example: “As an attacker, I can tamper with the order identifier in a request and view another customer’s invoice.” Abuse cases are detailed threat scenarios that represent possible attacks that might lead to an application being compromised (from a confidentiality, integrity and availability perspective). This is essentially what Threat Modeling is all about: Threat Modeling is the practice of identifying threats, vulnerabilities and attack scenarios that could affect your application. Security controls and countermeasures are then identified and chosen based on these threat models.

6. Do you subscribe to security feeds from your application platform(s)?

Platform code, external libraries and APIs form the bedrock of most modern applications. However, these components are often affected by pernicious vulnerabilities that can have widespread consequences (think Heartbleed, POODLE, Shellshock, Mass Assignment, or something as simple as a vulnerable WordPress plugin). It’s important that you learn about vulnerabilities in your application’s platform and dependencies in a timely manner. You can subscribe to security feeds from your local CERT (like US-CERT or CERT-In), MITRE, the NVD, or from a security update service like Sucuri.
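As one possible starting point, here is a rough sketch that polls the public NVD 2.0 REST API for CVEs matching a keyword; the keyword and output format are assumptions, and a production pipeline would track CPEs for your exact dependency versions rather than free-text searches.

```python
# Rough sketch: pull recent CVEs from the NVD 2.0 API that mention a platform keyword.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_cves(keyword: str, limit: int = 5):
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        yield cve["id"], cve["descriptions"][0]["value"]

if __name__ == "__main__":
    for cve_id, summary in recent_cves("wordpress plugin"):
        print(cve_id, "-", summary[:80])
```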

7. Do you do a security check for each release?

In my experience, each product release has the potential to introduce new security issues and vulnerabilities. Some companies never emerge from the chakravyuha (maze) of security vulnerabilities, and their security woes go around in circles. This is primarily because they do not incrementally test releases for vulnerabilities. Security testing per release not only reduces the burden on the developers, but also makes application security a focused and iterative process that is achievable and predictable. Just as you would do functional testing or integration testing for a release, security testing per release will make your life a whole lot easier.

8. Are your devs and architects trained on application security?

This extends from point 2, but differs in some ways. It is not only important for your developers and architects to be aware of application security concepts and practices; they also need to be trained on them periodically. Application security is constantly evolving, with newer attacks and vulnerabilities being identified by researchers and blackhats all over the world. Your developers and architects have to be formally trained on application security, either workshop-style or through a series of training capsules.

9. Do you maintain a Secure Coding Standard for your Developers?

A famous line from the movie “The Ten Commandments” is “So let it be written, so let it be done.” That dialogue was probably written by a QA guy somewhere. But jokes aside, this is important. A Secure Coding Standard is an essential guide for your developers. For newer members of your development team, it becomes a valuable introduction to your company’s secure coding and application security practices. For experienced developers, it is a great guide for maintaining and enforcing consistency across the lifecycle. A secure coding standard should convey the coding practices that your developers should follow for I/O operations, input validation, encryption and integrity, output encoding, request authentication, authorization checks and so on.
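As a small illustration of what such a standard might mandate in code, here is a sketch of a shared, allow-list input-validation helper of the kind a coding standard could require instead of ad-hoc checks scattered per feature; the field names and patterns are illustrative only, not a recommendation for every application.

```python
# One central, allow-list validator that a secure coding standard might mandate.
import re

VALIDATORS = {
    "username": re.compile(r"^[A-Za-z0-9_.-]{3,32}$"),
    "invoice_id": re.compile(r"^[0-9]{1,10}$"),
}

def validate(field: str, value: str) -> str:
    """Return the value if it matches the allow-list pattern for the field, else raise."""
    pattern = VALIDATORS.get(field)
    if pattern is None or not pattern.fullmatch(value):
        raise ValueError(f"rejected input for field '{field}'")
    return value
```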

10. Does your management understand/appreciate the impact of an application security breach?

One of the worst things that can happen to your company is to have management that does not understand or appreciate the impact of application security breaches. Application security and information security are top-down practices that begin in the boardroom. Management that is not aware of, or concerned with, application security will likely leave the company to end up as a breach statistic.