NASA’s Software Assurance Research Areas for FY20

The NASA Software Assurance Research Program (SARP) is sponsoring five research projects aimed at benefiting Software Assurance processes across the agency. The research program is aligned with agency goals to improve how NASA performs Software Assurance activities.

SARP directly supports:

  1. Improving the risk, issue and finding reporting from the NASA Software Assurance and software safety organizations.
  2. Adding value for Software Assurance and software safety activities.
  3. Demonstrating the importance of NASA Software Assurance activities.
  4. Providing standard tools and services for Software Assurance activities on projects.
  5. Focusing Software Assurance activities on known software issues, including targeting Software Assurance and software safety research activities.
  6. Developing more efficient and automated methods for Software Assurance activities.

SARP is a Headquarters Software Assurance program delegated to the Independent Verification and Validation Program’s Safety and Mission Assurance Support Office. It addresses fundamental Software Assurance problems in the field of Software Engineering and helps NASA Software Assurance personnel stay current with new practices, methods and tools needed to produce safe and reliable software. The researchers have a year to develop, analyze, test and record findings, which they will share across the agency and present to the Software Assurance Working Group (SAWG) at the end of the year.

“SARP gives those in the NASA Software Assurance community a chance to explore issues they have, find ways they can improve Software Assurance, and create new tools to do their jobs more efficiently,” said Scott Benton, SARP manager.

Each year, the SAWG identifies initiatives based on current needs in the Software Assurance community, collects research proposals, evaluates their intent and awards those that would best serve the SAWG objectives. Some awards address immediate Software Assurance issues, while others are longer-term, exploring where software development and Software Assurance need to be in the next five years or so.

The following are overviews of each Fiscal Year (FY) 2020 SARP project:

Software Analysis Enhancement With Bayesian Belief Networks

This project will research ways that Software Assurance can use Bayesian Belief Networks, in conjunction with human input, to enhance software analyses. Researchers plan to begin by exploring how a Bayesian Network could make static code analysis more time-efficient by incorporating iterative human feedback on the analysis results.

A successful demonstration by Mayur Naik, Sulekha Kulkarni, Mukund Raghothaman and Kihong Heo from the University of Pennsylvania showed that the concept could work. They built a software tool called “Bingo,” which uses Bayesian Networks combined with human interaction to rank static code analysis alarms by the likelihood that each represents an actual issue. They demonstrated significant time savings in static code analysis, finding 100% of true positive alarms after checking only 30% of the alarms.

Bingo reasons about the output of a static code analyzer, considering not only the alarms produced but also the underlying reasoning behind each alarm. A derivation graph captures the reasoning in static code analysis, which Bingo converts into a Bayesian Network. Bingo treats the alarms probabilistically, first assigning initial confidence weights to the various aspects of the code analyzer's reasoning.

For the confidence level to have any real meaning, a person must manually inspect the corresponding code and inform Bingo of the alarm's "ground truth": the alarm is either a true positive or a false positive. Given this ground truth, Bingo can assign more meaningful confidence scores to alarms that share similar reasoning. A person starts with the highest-confidence alarm. As the process continues, the alarms most likely to be true positives rise in priority. An analyst should therefore be able to address most of the true alarms while inspecting comparatively few.
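As an illustration of this feedback loop, the toy sketch below ranks alarms by a confidence score derived from shared reasoning steps and re-scores them as an analyst labels ground truth. It is a simplified stand-in for Bingo's Bayesian Network inference, not the actual algorithm, and the alarm IDs and reasoning-step names are hypothetical.

```python
# Toy sketch of Bingo-style alarm ranking (NOT Bingo's actual algorithm).
# Each alarm depends on a set of reasoning steps; an alarm's confidence is
# the product of the estimated reliabilities of those steps, and analyst
# labels update the estimates.

def confidence(alarm_steps, reliability):
    """Confidence that an alarm is a true positive."""
    p = 1.0
    for step in alarm_steps:
        p *= reliability[step]
    return p

def rank(alarms, reliability):
    """Alarm IDs sorted by descending confidence."""
    return sorted(alarms, key=lambda a: confidence(alarms[a], reliability),
                  reverse=True)

def update(alarms, reliability, counts, alarm_id, is_true_positive):
    """Record ground truth for one alarm and re-estimate every reasoning
    step it depends on (Laplace-smoothed true/false counts)."""
    for step in alarms[alarm_id]:
        t, f = counts[step]
        counts[step] = (t + 1, f) if is_true_positive else (t, f + 1)
        t, f = counts[step]
        reliability[step] = (t + 1) / (t + f + 2)
```

Labeling one alarm raises or lowers confidence in every alarm that shares its reasoning steps, which is what lets an analyst triage most true positives after inspecting only a fraction of the list.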

The concepts to be explored are:

  1. The use of Bayesian Belief Networks to combine results of a static code analysis with human interaction.
  2. The application of Bayesian Belief Networks to analyst findings to proactively identify problem areas in the software.

NASA Secure Software Engineering Portal, Year 2

This work is a continuation of the FY19 SARP effort to develop a NASA Secure Software Engineering Portal (SSEP). The SSEP is a web-based application that provides information about software security threats and weaknesses. The portal guides users in selecting and using Software Engineering and security-related practices, such as methods, techniques, tools, processes and personnel-related measures.

In FY19, the researchers elicited requirements, started the design and development of the SSEP, and produced a functional proof-of-concept showing some of the content and basic features of this application to stimulate users’ interest and gather their feedback. The input received so far from various stakeholders showed enthusiasm for easily finding relevant software cybersecurity information that could be trusted and used on NASA software. 

The proposed work will contribute to the development and safe execution of NASA software systems and the protection of mission assets by increasing software resilience to cyber attacks. The portal will help users build security in throughout the software development life cycle by providing knowledge that increases expertise in integrating Software Engineering and software security practices and supports informed decisions.

Toward a Guide for Software Defect Tracking, Modeling and Analysis

Currently, NASA-STD-8739.8, the NASA Software Assurance Standard, requires reporting trends in software Quality metrics but does not provide a detailed approach. Moreover, many projects collect and maintain such data (such as defect reports) in their databases, yet this defect data sees no direct and effective use. Therefore, guidelines to track, model and analyze defect data are needed.

The goal of this project is to develop models based on past NASA project data. Specifically, it seeks to:

  1. Analyze and assess current defect collection methods for two existing NASA defect datasets.
  2. Develop a defect discovery/resolution model that connects software Reliability models to the needs of practitioners.
  3. Model the defect life cycle to identify potential areas for process improvement and Risk Management.
  4. Document the models and their application to NASA data in a research paper that communicates lessons learned and offers guidance to apply the models to ongoing and future programs.

NASA-STD-8739.8 references software Quality metrics such as defect density, but these metrics enjoy limited use in practice. Although defect density is a reliable indicator of software Quality and Reliability, a single density value offers virtually no additional power to identify opportunities for improvement. Examples of data collected during development include defect types, defect discovery time, defect resolution time, location and priority/criticality. This data, if collected, tracked and modeled correctly, can provide Software Assurance engineers with valuable guidance to identify problematic areas and allocate testing resources.
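As a minimal illustration of moving beyond a single project-wide density number, the sketch below computes per-component defect density and flags hotspots for extra testing attention. The component names, defect counts and KSLOC figures are made-up placeholders, not NASA data.

```python
# Hedged sketch: per-component defect density (defects per KSLOC) as a way
# to locate problem areas, rather than reporting one project-wide value.

def defect_density(defects, ksloc):
    """Map each component to defects per thousand source lines of code."""
    return {c: defects[c] / ksloc[c] for c in defects}

def flag_hotspots(density, threshold):
    """Components whose density exceeds the threshold, worst first."""
    hot = [(c, d) for c, d in density.items() if d > threshold]
    return sorted(hot, key=lambda cd: cd[1], reverse=True)

# Illustrative inputs (hypothetical components and figures):
counts = {"guidance": 12, "telemetry": 3}
sizes = {"guidance": 4.0, "telemetry": 6.0}   # KSLOC
density = defect_density(counts, sizes)        # guidance: 3.0, telemetry: 0.5
hotspots = flag_hotspots(density, threshold=1.0)
```

Extending the per-component record with discovery time, resolution time and criticality, as the project proposes, would let the same data feed defect life cycle and reliability growth models.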

Software Assurance Tasking Checklists for NASA-STD-8739.8

One of the stated needs of the SARP is the automation of Software Assurance activities using Commercial Off-The-Shelf tools. Monitoring compliance with the requirements, including audits, is an integral part of NASA’s Software Assurance program to assure project, contractor and center compliance with agency directives.

With the publication of NASA-STD-8739.8, the scope of Software Assurance activities changed, requiring each Software Assurance organization to develop new Software Assurance checklists for performing audits and monitoring compliance at the project, center and agency level.

This initiative proposes to develop a process and tool to autogenerate Software Assurance checklists based on NASA-STD-8739.8 in Microsoft Excel format. Project attributes, such as software class or project phase, can be used to generate the checklists. Researchers plan to develop techniques that transform the Excel file into the formats used within NASA centers, allowing for adaptation.
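A sketch of what such autogeneration might look like, using Python's standard csv module in place of a native Excel writer: requirements are filtered by the software class they apply to and written out as checklist rows. The requirement IDs, texts and class applicability below are invented for illustration and are not taken from NASA-STD-8739.8.

```python
import csv

# Hypothetical applicability matrix: requirement id -> software classes it
# applies to, plus a checklist prompt. Entries are illustrative only.
MATRIX = {
    "SA-01": {"classes": {"A", "B", "C"},
              "text": "Confirm a software development plan exists."},
    "SA-02": {"classes": {"A", "B"},
              "text": "Audit static analysis results each build."},
    "SA-03": {"classes": {"A"},
              "text": "Verify hazard analyses cover software contributions."},
}

def generate_checklist(software_class, path):
    """Write a compliance checklist for one software class; return its rows."""
    rows = [(rid, item["text"]) for rid, item in sorted(MATRIX.items())
            if software_class in item["classes"]]
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Requirement", "Checklist item", "Compliant (Y/N)", "Notes"])
        for rid, text in rows:
            writer.writerow([rid, text, "", ""])
    return rows
```

Because the applicability matrix is data rather than code, updating the checklist when the standard changes means editing one table, which is the kind of adaptation the initiative describes.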

Researchers will evaluate automation techniques to determine the most efficient and effective means for the Software Assurance practitioners to use the checklists. Because the NASA Software Assurance community’s needs are the driving force behind the checklist tool development, the NASA center SAWG representatives and the Software Assurance technical fellow are the stakeholders for this initiative. The community will provide inputs as part of a requirements-gathering effort to ensure the Software Assurance needs are complete.

A prototype of the new checklists and associated methodologies will be developed based on stakeholder and NASA-STD-8739.8 requirements. Once the Software Assurance community has verified and validated the prototype and automation techniques, they could be made operational for the agency.

Automating Software Complexity Monitoring and Security Analysis

This project will address the FY20 research topic, “software coding analysis methods and tools.” The project will use the Core Flight Executive/Core Flight System as its current NASA project because of its impact on multiple NASA missions and other ongoing work.

Researchers propose to evaluate and identify a valid list of complexity-related metrics that can be automatically generated for consideration throughout the entire software development process, as part of a modern Continuous Integration build approach within a Development and Operations workflow.

Users will select candidate complexity-related metrics and determine whether any correlation exists between the reported metrics for a given set of software functions/methods and the security defects reported by automated static code analysis across a large number of source code bases (within NASA and open source).
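One plausible way to test for such a correlation is a simple correlation coefficient over per-function metric values and defect counts, as sketched below. The data shape and values are assumptions for illustration; real inputs would come from the complexity and static analysis tooling.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def correlate_metric(functions, metric_key):
    """Correlate one complexity metric with per-function defect counts.

    `functions` maps a function name to a record like
    {"cyclomatic": ..., "defects": ...} — a hypothetical shape standing in
    for real tool output.
    """
    xs = [f[metric_key] for f in functions.values()]
    ys = [f["defects"] for f in functions.values()]
    return pearson(xs, ys)
```

A coefficient near +1 across many code bases would support using that metric to flag functions deserving focused security review; a coefficient near zero would argue for dropping it from the candidate list.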

The research will study and identify the target software architecture areas that best support security risk identification. The research will focus on complexity-related monitoring and the possible use of generated software control flow diagrams to review critical software from a security standpoint (e.g., authentication, access control and input validation). 

The primary goal of this research effort is to establish a set of complexity-related metrics to use to identify areas of focus in software for possible security-related concerns.

To find out more about current and past SARP initiatives, visit SARP’s NASA Engineering Network or the Software Assurance web page or contact Benton.