Finding, reporting and avoiding vulnerabilities

12 minute read

14. Finding, reporting and avoiding vulnerabilities

Today’s objectives

  • How do we discover and categorize vulnerabilities?
  • How do we find them in existing code?
  • How do we best avoid introducing them in the first place?

Vulnerabilities

  • A vulnerability is a potential weakness – it is not a guaranteed problem until it is turned into an exploit
    • An exploit is an attack on a computer system that takes advantage of a vulnerability – i.e. a vulnerability that has been proven to work in practice
  • Not all vulnerabilities become realistic exploits; there can be reasons why a weakness is impractical for an attacker to use
  • There is an ‘arms race’ between security researchers and attackers – both examine software, trying to find vulnerabilities and work out whether they are exploitable
  • One of the ideas from the ethics material was responsible disclosure – the practice whereby security researchers (aka ‘white hats’) privately inform a software vendor of the vulnerabilities they have found in a product, and only go public after a given time period, such as 90 days

  • Q – Why do researchers follow this practice?
    • Notifying the vendor and giving them e.g. 90 days puts pressure on the vendor to take the problem seriously, while still allowing a fair amount of time to fix it; after that, the researchers go public
    • Going public is also a responsibility to users – so that people know they are using something that could be exploited
  • The practice balances two different responsibilities:
    • The responsibility to give the creator a fair amount of time to fix the problem
    • The responsibility to let users know they are running something that is attackable
  • 0-day (zero-day) – important terminology – an exploit that is being actively used by attackers before researchers and the vendor discover it; the worst-case scenario

The Vulnerability Cycle

  • (Figure: the vulnerability cycle)
  1. A vulnerability is discovered; under responsible disclosure, the vendor is informed privately
  2. There is then a period in which the vendor searches for a fix; attackers may also discover the vulnerability and try to create an exploit they can use to attack – the arms race again
  3. If an exploit appears publicly (the bad guys have won the race), it attracts media attention
  4. Eventually, a patch is released that can be applied to fix the vulnerability
    1. The question is: how many people apply the patch immediately? Not many.
    2. It is common not to patch immediately, because the patch might not be stable. If you are running a system that needs to be up 24/7, applying the patch could break things. If you are 100% sure the patch works, fine – but if your system goes down because of the patch, the business could lose more money from the outage than from the bug.
    3. A solution is to keep a realistic copy of the system that is not the live server, purely for testing, and check the patch behaves there first.
  5. When a vulnerability is discovered in one piece of code, it is likely that the same flaw appears elsewhere
    1. E.g. it could be in a library function that many programs use, or a piece of code may have been copied, modified slightly and reused in many places
    2. One vulnerability can therefore trigger the discovery of others, which may be exploited in other products before fixes are found – taking us back to stage 2 to go round again
  6. Worst case – a tool is created that automates the exploit against unpatched systems
    1. This means non-specialists (‘script kiddies’) can mount attacks, which greatly increases the scale of the problem because many more people are attacking
  • The only way to break the cycle is to build software that is inherently more secure – systems that are harder to break and need fewer patches. That is the ultimate solution.

Standardization

  • ISO 29147, ISO 30111
  • These provide guidelines for vendors on how to handle vulnerability reports and what process to follow between receipt of a report and disclosure to the public
  • The idea is that if people follow the standards, it is clearer to everyone how these situations will be handled
  • They are not freely available – not open standards in that sense

Aids to discussion/analysis

  • In addition to standardised approaches, there are tools to help us analyse vulnerabilities
  • CVE – Common Vulnerabilities and Exposures – one of the most important
    • A classification scheme that lets us identify vulnerabilities. Once they have been recognised as real vulnerabilities, they are numbered so they can be tracked and referred to – the CVE identifier
    • On the CVE website you can search by ID or keyword and get a list of names and descriptions
      • Identifiers have the form CVE-YEAR-NUMBER, e.g. a 2019 entry is numbered in the order it was discovered that year
  • CWE – Common Weakness Enumeration
    • Works at a less specific level – a way of classifying vulnerabilities by type of weakness
    • E.g. CWE-120 – buffer copy without checking size of the input
  • Scoring systems
    • Useful for prioritising vulnerabilities – a way of measuring their significance
    • CVSS scores range from 0 to 10; CWSS from 0 to 100 – both are non-trivial calculations (a rough sketch of the CVSS arithmetic is given below)
      • With CVSS you select values for all the metrics and impacts, and the formula gives you the score
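As a rough, hedged sketch of what that calculation looks like, here is the CVSS v3.1 base-score arithmetic for an unchanged-scope, network-exploitable, high-impact vulnerability. The metric weights and rounding rule are taken from the public CVSS specification as best I recall them – treat this as illustrative, not a reference implementation.

#include <math.h>
#include <stdio.h>

/* CVSS rounds scores up to one decimal place */
static double round_up(double x) { return ceil(x * 10.0) / 10.0; }

int main(void)
{
    /* Example metric weights for the vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H */
    double av = 0.85, ac = 0.77, pr = 0.85, ui = 0.85;
    double c = 0.56, i = 0.56, a = 0.56;

    double iss = 1.0 - (1.0 - c) * (1.0 - i) * (1.0 - a);
    double impact = 6.42 * iss;                     /* scope unchanged */
    double exploitability = 8.22 * av * ac * pr * ui;

    double base = (impact <= 0.0)
                      ? 0.0
                      : round_up(fmin(impact + exploitability, 10.0));

    printf("CVSS base score: %.1f\n", base);        /* prints 9.8 */
    return 0;
}

Compile with -lm. A fully remote, no-interaction, high-impact vector like this one scores 9.8 (Critical), which is why so many headline CVEs carry that number.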

Dissemination

  • The other important thing is how vulnerability information is disseminated to people
  • US-CERT – weekly summaries and in-depth technical articles about cyber security in general
  • NCSC (UK) – the UK equivalent; a centralised point providing warnings, bulletins and summaries
  • BugTraq mailing list – extremely comprehensive; only some of its reports will appear on US-CERT or NCSC
  • Software vendors themselves have security advisory pages on their websites, with download links for patches

Questions

  • How do we find vulnerabilities in existing code?
    • This requires techniques for analysing existing source code
  • How do we minimise the risk of introducing them in the first place?
    • This means modifying the software engineering process to better account for security

Source Code QA techniques

  • To analyse existing source code there are techniques that rely on human beings and techniques that rely on software:
    • Code review
    • Checklist
    • Static analysis tools

Code review

  • One way to find bugs
  • Exploits the ‘Hawthorne effect’ – when people know they are being observed, they work harder
    • The idea here: if your code is going to be reviewed, you are more careful about how you write it
  • Various approaches
    • Pair programming – an agile strategy: every line of code is written by two people, and not always by the same pair. Every line is touched by more than one programmer, which hopefully increases the chance of catching errors
    • Peer review – set meetings where you go through the code together; security issues can be made an explicit part of the process – works best on small amounts of code
    • External review – highly formalised – for safety-critical systems you want independent, unbiased and therefore more objective views. But it all depends on the reviewers having the expertise to spot security issues
    • If you want to be really careful, you can tie review to commits/pushes
      • Code must be reviewed before it is merged to master, and version control systems provide ways of flagging merges that require manual review
  • Is it worth the effort?
    • Studies suggest code inspection can save about 9 hours of testing and debugging
    • Code reviews are reported to be 20–30 times more effective at finding bugs than relying on testing alone
    • These are general studies of bugs – is the picture any different for security?

The ‘Many eyeballs’ fallacy

  • Linus’s law – “Given enough eyeballs, all bugs are shallow”
    • Given enough people looking, any bug should be easy to find
    • It sounds obvious, but it lacks evidence
    • If you look at open source projects and the number of people involved, bug-finding rates do not scale linearly with the number of reviewers, which is what Linus’s law would predict
      • There is an effective limit to the number of useful reviewers – you cannot keep improving bug finding just by adding more people
      • Caveat – you need the ‘right kind of eyeballs’: most reviewers lack strong motivation and training for finding security bugs; bug hunting isn’t glamorous for them, and security bugs require very specific knowledge
  • Q: how long did the Heartbleed vulnerability exist in OpenSSL before discovery?
    • A – 2 years!

Checklists

  • E.g. aviation: because there are so many things that could go wrong on a flight, and it is extremely important to catch them before take-off, a checklist is always worked through
    • Can we use the same technique for security?
  • The nice thing is that a checklist ensures you do not forget important issues, e.g. format string bugs (see the sketch after this list)
  • Checklists are useful during design and security testing, so that you don’t introduce bugs
  • They can be signed off formally – by managers – so there is clear accountability (someone to blame!)
  • But they can’t address every issue
  • They also need to be maintained to remain useful: software changes, so some items become irrelevant while new classes of vulnerability emerge
    • Format string bugs were not an issue in the 1970s; once they were recognised, people had to go back through their code and check that printf was not being used unsafely
    • So it is good to keep checklists under version control to track changes
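As a concrete illustration of that format string item, here is a minimal sketch (the function name is hypothetical, purely for illustration):

#include <stdio.h>

void log_message(const char *user_input)
{
    printf(user_input);        /* UNSAFE: the user controls the format string */
    printf("%s", user_input);  /* Safe: the input is only ever treated as data */
}

A checklist item like “never pass externally supplied data as a format string” makes the difference between these two lines hard to miss.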

Static analysis tools - software

  • Static analysis – you analyse the code without actually running the application
  • Lexical analysis
    • Tokenises the program and looks for particular kinds of elements, e.g. function calls, checking whether a given call is known to be dangerous
    • Flawfinder (for C/C++), written in Python
    • RATS – written in C, covers more programming languages
  • Parsing
    • Involves understanding the structure of the program in more depth
    • Splint – a linting tool that warns you about risky programming style
  • Dataflow analysis
    • You analyse how data flows through the program. If you find a place where a buffer overrun can happen, you don’t necessarily know that it will happen – only that it might. If an attacker cannot influence what goes into that buffer, it is probably a general bug rather than a security issue
    • So you need to identify the places where user input can reach that bit of code, and that requires dataflow analysis (see the sketch after this list)
    • SpotBugs – a static analysis tool for Java
  • ‘Separation logic and bi-abduction’
    • Uses mathematical logic and logical inference to work out how the program uses memory
    • Facebook’s Infer
  • The idea is that these tools analyse the program without running it, to help you find potential bugs
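A minimal sketch of why dataflow matters (hypothetical code, for illustration): a purely lexical tool flags every strcpy it sees, but only the copy reachable from user input is a real security concern.

#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    char fixed[16];
    char risky[16];

    /* A lexical tool flags this strcpy too, but the source is a constant
       the attacker cannot influence – almost certainly a false positive. */
    strcpy(fixed, "hello");

    if (argc > 1) {
        /* Dataflow analysis shows user-controlled argv[1] reaches this copy
           with no length check – a genuine overflow risk. */
        strcpy(risky, argv[1]);
        printf("%s\n", risky);
    }
    return 0;
}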

Example

#include <stdio.h>
#include <string.h>

int main(int argc, char* argv[])
{
  char buf[256];
  if (argc > 1) {
    strcpy(buf, argv[1]);   /* no bounds check – possible buffer overflow */
    printf(buf);            /* user-controlled format string */
    printf("\n");
  }
  return 0;
}
  • Run flawfinder hello.c to analyse this statically. What would it highlight?
    • (buffer) char – statically sized arrays can be improperly restricted, leading to overflows – use bounds checking
    • (buffer) strcpy – does not check for buffer overruns
    • (format) printf – the format string can be influenced by an attacker
  • For a long program, flawfinder produces a lot of output along with recommendations. If there are hundreds of hits, there is a danger you get tired of them and stop caring, and some will be false positives because the tool does not understand the program’s dataflow

More examples

  • rats sample.c
    • RATS looks for around 300 different things in the code
    • It recognises risky constructs and flags potential vulnerabilities
    • If the program is setuid, it runs with the permissions of the program owner rather than the user running it, so it may have very high privileges; the flagged code uses the access() function to check a file before using it. If an attacker can remove that file and replace it with a symbolic link to, say, the password file between the check and the use, the program will overwrite the password file – exploiting a ‘Time of Check to Time of Use’ (TOCTOU) race. A minimal sketch of the pattern follows below.
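A minimal sketch of the TOCTOU pattern being warned about (a hypothetical setuid program; the file name is just for illustration):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/report.txt";

    /* Check: does the real (invoking) user have write permission? */
    if (access(path, W_OK) == 0) {
        /* ...window in which an attacker can replace path with a symlink... */

        /* Use: opens whatever path points to *now*, with the program
           owner's (possibly root's) privileges. */
        FILE *f = fopen(path, "w");
        if (f) {
            fprintf(f, "this overwrites the link target\n");
            fclose(f);
        }
    }
    return 0;
}

The usual fix is to avoid the separate check entirely – just open the file and handle failure, or drop privileges before touching user-supplied paths.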

Limitations of static analysis

  • False positives – the tool flags something that isn’t actually a risk
  • False negatives – the tool misses something it should have flagged
  • The tool needs a database of known flaws, which may be out of date – it needs maintenance
  • Source code is required which may not always be available

Testing != security testing

  • (Figure: Venn diagram of intended vs observed behaviour)
  • Testing and security testing are different things. Draw the software’s intended behaviour and its observed behaviour as two overlapping sets – they do not perfectly coincide
  • The overlapping middle – correct and secure behaviour
  • Blue – behaviour you intended but did not observe. Conventional testing techniques reveal this.
  • Red – behaviour you observe but never intended – this is where security flaws lie, and it is the target of security testing
  • You have to think differently about testing to reach this part of the diagram. Try things ordinary users would never think of, e.g. extremely long inputs (see the sketch below)
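A minimal sketch of that idea, assuming the flawed example program above has been built as ./hello: a tiny harness that feeds it an input far longer than any reasonable user would type.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Build a 10,000-character argument – far larger than the 256-byte buffer */
    char *huge = malloc(10000);
    if (huge == NULL)
        return 1;
    memset(huge, 'A', 9999);
    huge[9999] = '\0';

    /* Run the program under test with the oversized input; a crash here
       points at a memory-safety bug worth investigating. */
    execl("./hello", "./hello", huge, (char *)NULL);
    perror("execl");   /* only reached if the exec itself fails */
    return 1;
}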

Whittaker’s fault model

  • The application interacts with the operating system via the kernel, but it also interacts with APIs, the UI and the file system. All of these need to be considered from a security standpoint.
    • E.g. the APIs of other components, configuration files on the file system, etc.
  • When we develop, we tend to focus on the UI and its inputs and much less on the invisible components, so there are tools to help with this – you need to investigate faults from all the different sources!
    • What happens when memory gets full? What if a dynamic library can’t be loaded? These need testing too!

Run-time fault injection

  • The idea is to use a tool that injects faults at run time, simulating these failure conditions (a minimal sketch of the idea follows after this list)
  • Faults can be simulated using software that intercepts and modifies system calls
  • Example – Holodeck – provides an interface for different kinds of fault injection:
    • System monitoring
    • Fault injection of:
      • Insufficient memory, failure to lock memory
      • No disk space, too many open files, no disk in drive
      • Low bandwidth, network disconnected, no ports
      • Missing libraries
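A minimal sketch of the underlying idea (not Holodeck itself, just an assumed wrapper of my own): make an allocation call that can be forced to fail, so you can check how the code under test behaves when memory runs out.

#include <stdio.h>
#include <stdlib.h>

static int inject_alloc_failure = 0;    /* the test harness flips this flag */

/* Wrapper that either forwards to malloc or simulates "insufficient memory" */
static void *test_malloc(size_t n)
{
    if (inject_alloc_failure)
        return NULL;
    return malloc(n);
}

int main(void)
{
    inject_alloc_failure = 1;           /* inject the fault */

    char *buf = test_malloc(1024);
    if (buf == NULL) {
        /* The code under test should reach a safe, non-exploitable state */
        fprintf(stderr, "allocation failed – handled gracefully\n");
        return 1;
    }
    free(buf);
    return 0;
}

Real fault-injection tools do this at the system-call boundary rather than in the source, which is what lets them exercise unmodified binaries.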

Testing using Whittaker’s model

  • Attack environmental dependencies
    • Block access to libs; manipulate Windows registry; corrupt config files; simulate lack of memory, disk
  • Attack user interfaces
    • Overrun buffers; use escape characters; inject code;
  • Attack the design
    • Find unprotected accounts; connect to all ports; fake sources of data; explore alternate routes to functionality
  • Attack the implementation
    • Exploit TOC-TOU; force all errors; uncover test APIs; screen temporary files for sensitive information

Using threat models

  • The threat model drives the testing process.
  • Each threat needs a test plan
    • We need a test plan for each threat we identify, so that it can be checked before release
  • STRIDE tells us what types of test to perform
  • Quantitative risk assessment (e.g. a DREAD rating) can be used to prioritise the test plans (a rough sketch of the arithmetic is given below)
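A rough sketch of DREAD-style prioritisation (the threats and scores here are made up for illustration): each threat is rated 1–10 in five categories and the ratings are averaged, giving a single number to sort test plans by.

#include <stdio.h>

struct threat {
    const char *name;
    int damage, reproducibility, exploitability, affected_users, discoverability;
};

/* DREAD score = mean of the five category ratings */
static double dread_score(const struct threat *t)
{
    return (t->damage + t->reproducibility + t->exploitability +
            t->affected_users + t->discoverability) / 5.0;
}

int main(void)
{
    struct threat threats[] = {
        { "SQL injection in login form",        9,  9, 8, 10, 8 },
        { "Verbose error message leaks paths",  3, 10, 5,  4, 9 },
    };

    for (int i = 0; i < 2; i++)
        printf("%-38s DREAD = %.1f\n", threats[i].name, dread_score(&threats[i]));
    return 0;
}

Whatever scoring scheme is used, the point is the ordering it induces: the highest-scoring threats get their test plans written and run first.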

Secure software engineering

  1. Analyse requirements
  2. Produce a design
  3. Implement the design
  4. Add security
  5. Test
  6. Fix bugs
  7. Ship product
  • This is the classic waterfall approach with security bolted on. What’s wrong with it?
    • You can’t just slot security in between implementation and testing – it needs to be there from day one.

Secure from day one

  • Security (re)training for team members, ensuring they have up to date security knowledge.
  • A threat model is begun at the earliest stages, when the product is still being envisioned, and refined thereafter
  • The design process is driven by that threat model and adopts standard principles such as reluctance to trust and granting of minimal privilege
  • Designs undergo security reviews
  • Code Quality Assurance (QA) and testing
  • ‘Security push’ in the later stages – a period in which the team focuses exclusively on security bugs
  • Full security audit before shipping

Summary

  • Discussed the vulnerability cycle, and tools that help in the discussion and analysis of vulnerabilities
  • Explored the roles played by code review and testing in the prevention of vulnerabilities
  • Considered the value of static analysis tools
  • Discussed how software engineering processes need to change in order to address security concerns
