Training and Continuing Education at ACSAC

ACSAC offers several opportunities to help you maintain your professional certification: technology courses, the ACSAC technical program, and our FISMA training track. For all of these, ACSAC provides sufficient evidence to support Continuing Professional Education (CPE) credit claims:

  • For formal ACSAC courses and training with pre-registration, ACSAC will provide printed certificates of completion indicating the number of hours of training.
  • For the ACSAC technical program, a copy of the final program, the attendance roster, and the registration receipt are your evidence.*

Printed course notes will be provided to attendees who register a week or more before the conference. Attendees who register later may request the notes as PDF files.

ACSAC technology courses and the ACSAC technical program (including the ACSAC FISMA training track) are a great way to meet CPE requirements!

Register Now


Technology Courses


Course M1 – Keeping Your Web Apps Secure: The OWASP Top 10 & Beyond
SOLD OUT

Mr. Robert H'obbes' Zakon, Zakon Group LLC

Monday Morning, December 5th, Half Day

The Open Web Application Security Project (OWASP) Top 10 provides an overview of the most critical web application security risks. This course introduces the OWASP Top 10 along with other risks, and discusses the techniques and practices to protect against them. References to software tools and other secure coding resources will also be provided. This course is a must if you are developing web applications, managing developers, researching web security, or simply are a security enthusiast.

This course is a half-day version of the sold out WebAppSec course taught at OWASP AppSec USA 2011. Sign up early to guarantee a spot!

Outline

  1. Introduction. Overview of the need for secure coding practices in web application development.
  2. The OWASP Top 10. From Injection and Cross-Site Scripting (XSS) to Insecure Cryptographic Storage and Cross-Site Request Forgery (CSRF) — we will cover OWASP's Top 10 Risks in detail — how these risks lead to vulnerabilities, and how to mitigate them.
  3. Beyond the Top 10. The Top 10 are not meant to be comprehensive, but to make developers aware of the most commonly encountered risks. Here we will cover additional risks and vulnerabilities that every web developer needs to be aware of, along with how to mitigate them.
  4. Gotchas, Pitfalls & Prevention. In addition to secure coding practices addressing potential vulnerabilities, there are still some underlying technologies that could result in unintended consequences. Learn about what these are and how to prevent them from being exploited.
  5. Security Tools & Resources. It's a half-day course, so you get lots of references to additional resources and tools.
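As a taste of the mitigation techniques the course covers, here is a small hypothetical sketch (not course material) of two Top 10 defenses: parameterized queries against injection, and output encoding against cross-site scripting.

```python
import html
import sqlite3

# --- Injection: never splice user input into a query string ---
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable pattern (do NOT do this): string concatenation lets
# the payload rewrite the query and match every row.
# query = "SELECT role FROM users WHERE name = '" + user_input + "'"

# Safe: a parameterized query treats the payload as literal data.
row = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchone()
assert row is None  # the payload matches no user name

# --- XSS: encode untrusted data before writing it into HTML ---
payload = '<script>alert("xss")</script>'
encoded = html.escape(payload)
assert encoded == "&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;"
```

The common thread in both defenses is keeping untrusted data in a "data" role rather than letting it be parsed as code.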

Prerequisites

Some understanding of web application development may be helpful when discussing risk mitigation techniques.

About the Instructor

Mr. Robert Zakon is a technology consultant and developer who has been programming web applications since the Web's infancy, over 15 years ago. In addition to developing web applications for web sites receiving millions of daily hits, he works with organizations in an interim CTO capacity, and advises corporations, non-profits and government agencies on technology, information, and security architecture and infrastructure. Robert is a former Principal Engineer with MITRE's infosec group, CTO of an Internet consumer portal and application service provider, and Director of a university research lab. He is a Senior Member of the IEEE, and holds BS & MS degrees from Case Western Reserve University in Computer Engineering & Science with concentrations in Philosophy & Psychology. His interests are diverse and can be explored at www.Zakon.org.

Prior Feedback

Following are quotes from prior attendees of Mr. Zakon's web development security courses:

"Presented in a very structured format. Instructor knew his stuff. Good presentations."

"Very knowledgeable! Covered a lot of topics in a limited amount of time"

"The presenter was excellent. He didn't present an overload of information. The day went very quickly and I am leaving with a lot of valuable information"

"The slides were excellent - full of good code examples and explanations"

"Material that was presented was presented and covered well. Instructor is very knowledgeable"

"Handouts & presentation well organized & coordinated"


Course M2 – State of the Practice: Botnets and Related Malware

Sven Dietrich, Stevens Institute of Technology

Monday Afternoon, December 5th, Half Day

In the last few years, botnets have entered the vocabulary well beyond technical books and papers. The ubiquitous nature of computing has allowed malicious software (aka malware) to exist on a variety of platforms, from the company server to the smartphone. By herding these pieces of malware, referred to as bots, into a large conglomerate known as a botnet, a more powerful entity is created. The people directing these botnets, called botmasters or botherders, can exert tremendous power on the Internet: from distributed denial of service (DDoS) attacks on companies, infrastructure, or governments, to running phishing servers, distributing various forms of content including spam, and, last but not least, collecting keystrokes from affected devices in search of authentication or credit card data. Whether for financial gain, political pressure, espionage, or just because it can be done, botnets remain a continuing problem.

The course will trace the development of botnets from the exploitation of server vulnerabilities to install malware, to early forms of DDoS, and on to modern botnets that exhibit various topologies or live in your browser. The increasing sophistication of the communication patterns used for command and control challenges the usual approaches to detection and mitigation, and these developments will be illustrated using real examples of botnet traffic. The course will cover the gap between the state of the art and the state of the practice, and show how to bring some research approaches into practice.

The student will walk away with a basic understanding of what malware is, how malware has evolved up to this point, what host and network-based techniques are employed to thwart simple mitigation and response, and what current approaches exist for detection. An intermediate student will learn the depth of sophistication of the malware. Academic researchers will appreciate the exposure to real data and experience about recent botnets (e.g. Nugache, Storm, Conficker, MegaD), a consolidated overview of the relevant papers on this topic, as well as online resources.

Outline

  1. Fundamentals and terminology. Basic networking and routing protocols; Cryptography/cryptanalysis.
  2. Introduction to malware. Droppers, agents, IRC bots, Trojans; Evolution of attack tools.
  3. Classes of botnets. What they do. How they are installed. Exploits vs. social engineering. State of the art bots. Command and control (C&C) techniques in use. Fast flux. Topologies. Centralized, P2P, tiered P2P.
  4. Network-based techniques. Building a state-of-the-art monitoring/traffic capture facility. Correlating traffic. Pcap vs flows. Limitations of the view (local vs. global). Understanding bot communication protocols. Encrypted C&C. Detection of C&C and attacks. Examples. Impact assessment.
  5. Host-based malware techniques and limitations. Packers, loaders, encryptors, and related techniques used by malware, and their countermeasures. Malware updaters, modularization. Instrumenting a host for analysis. Virtual vs. real host.
  6. Beyond the state of the practice. Research directions.
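As a flavor of the network-based detection ideas in item 4, here is a deliberately simplified sketch (not course material): periodic "beaconing" to a command-and-control server often shows machine-like regularity in connection inter-arrival times, which a simple coefficient-of-variation test can flag.

```python
from statistics import mean, pstdev

def beacon_score(timestamps):
    """Coefficient of variation of inter-arrival times:
    values near 0 suggest machine-like periodic check-ins,
    larger values suggest bursty human-driven traffic."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) / mean(gaps)

# Hypothetical flow timestamps (seconds): a bot phoning home
# every ~60s versus a human browsing in bursts.
bot   = [0, 60, 120, 181, 240, 299, 360]
human = [0, 2, 3, 45, 46, 300, 310]

assert beacon_score(bot) < 0.05 < beacon_score(human)
```

Real detectors must of course cope with jitter deliberately added by the botmaster, encrypted C&C, and the local-versus-global visibility limits discussed in the outline.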

Prerequisites

A basic understanding of IP networking, network protocols, and routing as well as an understanding of computer security fundamentals is required. The course is intended to be useful to system administrators, network administrators and computer security practitioners and researchers.

About the Instructor

Dr. Sven Dietrich is an assistant professor in the computer science department at Stevens Institute of Technology. Prior to joining Stevens in August 2007, he was a Senior Member of the Technical Staff at CERT Research at Carnegie Mellon University and also held an appointment at the Carnegie Mellon University CyLab, a university-wide cybersecurity research and education initiative. He taught cryptography in the Mathematics and Computer Science Department at Duquesne University in Spring 2007. From 1997 to 2001, he was a senior security architect at the NASA Goddard Space Flight Center, where he observed and analyzed the first distributed denial-of-service attacks against the University of Minnesota in 1999. He taught Mathematics and Computer Science as adjunct faculty at Adelphi University, his alma mater, from 1991 to 1997. His research interests include computer and network security, anonymity, cryptographic protocols, and cryptography. His previous work has included a formal analysis of the secure sockets layer protocol (SSL), intrusion detection, analysis of distributed denial-of-service tools, and the security of IP communications in space. His publications include the book Internet Denial of Service: Attack and Defense Mechanisms (Prentice Hall, 2004), as well as recent articles on botnets.

Dr. Dietrich has a Doctor of Arts in Mathematics, an M.S. in Mathematics, and a B.S. in Computer Science and Mathematics from Adelphi University in Garden City, New York.

His web site is http://www.cs.stevens.edu/~spock/.


Course M3 – Code Transformation Techniques for Software Protection

Dr. Christian Collberg, University of Arizona
Dr. Jasvir Nagra, Google Inc.

Monday, December 5th, Full Day

In this professional development course we will describe techniques for software protection, i.e. techniques for protecting secrets contained in computer programs from being discovered, modified, or re-distributed. Important applications include protecting against software piracy, license check tampering, and cheating in on-line multi-player games. The attack model is very liberal: we assume that an adversary can study our program's code (maybe first disassembling or decompiling it), execute it to study its behavior (perhaps using a debugger), or alter it to make it do something different than what we intended (such as bypassing a license check). In a typical defense scenario we use code transformation techniques to add confusion to our code to make it more difficult to analyze (statically or dynamically), tamper-protection to prevent modification, and watermarking to assert our intellectual property rights (by embedding a hidden copyright notice or unique customer identifier).

Software protection is a fairly new branch of computer security. It's a field that borrows techniques not only from computer security, but also from many other areas of Computer Science such as cryptography, steganography, media watermarking, software metrics, reverse engineering, and compiler optimization. The problems we work on are different from other branches of computer security: we are concerned with protecting the secrets contained within computer programs. We use the word secrets loosely, but the techniques we present in this course (code obfuscation, software watermarking and fingerprinting, tamperproofing, and birthmarking) are typically used to prevent others from exploiting the intellectual effort invested in producing a piece of software. For example, software fingerprinting can be used to trace software pirates, code obfuscation can be used to make it more difficult to reverse engineer a program, and tamperproofing can make it harder for a hacker to remove a license check.

In the most general sense, to obfuscate a program means to transform it into a form that is more difficult for an adversary to understand or change than the original code. "Difficult" here means that the obfuscated program requires more human time, more money, or more computing power to analyze than the original program. Under this definition, distributing a program in compiled form rather than as source is a form of obfuscation, since analyzing binary machine code is more demanding than reading source. Similarly, we would consider a program that has been optimized more obfuscated than one that has not, since many code optimizations make analysis (both by humans and by tools such as disassemblers and decompilers) more onerous.

However, the tools and techniques we present in this course go further than compilation and optimization to make a program hard to understand. In contrast to an optimizer, which rearranges code for the purpose of efficiency, a code obfuscator transforms code for the sole purpose of making it difficult to analyze. A negative by-product of obfuscating transformations is that the resulting code often becomes larger, slower, or both.

Code obfuscation can be both static and dynamic. Static techniques reorganize the program text itself to make it hard to analyze using static analysis techniques. Dynamic techniques use self-modifying code to attempt to thwart an adversary's dynamic analysis (such as using a debugger).
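To make the static side of this distinction concrete, here is a toy sketch (not from the course) of one classic static-obfuscation ingredient, the opaque predicate: an expression whose value the defender knows in advance but which a static analyzer must work to resolve. The dead branch it guards exists only to clutter the control-flow graph.

```python
def opaquely_true(x: int) -> bool:
    # x*(x+1) is a product of two consecutive integers, so it is
    # always even. The predicate is constant, but proving that
    # requires number-theoretic reasoning a naive static analyzer
    # will not attempt.
    return (x * (x + 1)) % 2 == 0

def check_license():
    return "licensed"

def obfuscated_check(x):
    if opaquely_true(x):           # always taken at run time
        return check_license()
    else:                          # dead decoy branch that bloats
        return "trial-" + str(x)   # the static control-flow graph

assert all(opaquely_true(x) for x in range(-100, 100))
assert obfuscated_check(7) == "licensed"
```

Dynamic obfuscators instead change the code while it runs (self-modification), which Python's function objects do not naturally model, so no sketch is attempted here.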

What's particularly interesting about code obfuscation is that it's a double-edged sword: bad guys use it to protect their malware from discovery, good guys use it to protect their programs from reverse engineering, and bad guys can also use it to destroy secrets (such as watermarks) stored in the good guys' programs.

The goal of obfuscation is to add so much confusion to our program that an adversary will give up trying to understand or modify it. In addition to obfuscating the code, we can also tamperproof it. This means that if the adversary tries to modify the program, he will be left with a program with unintended side effects: the cracked program may simply refuse to run, it could crash randomly, it could phone home and tell us about the attack, etc.

In general, a tamperproofing algorithm performs two duties. First, it has to detect that the program has been modified. A common strategy is to compute a checksum over the code and compare it to an expected value; an alternative is to check that the program is in an acceptable execution state by examining the values of variables. Once tampering has been detected, the second duty is to execute the tamper response mechanism, for example causing the program to fail. This is actually fairly tricky: we don't want to alert the attacker to the location of the tamperproofing code, since that would make it easier for him to disable it.
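The two duties above can be sketched in a few lines. This toy Python example (real protectors operate on native or byte code and hide the check itself) records a digest of the guarded function at protection time, re-checks it at run time, and fails when a simulated cracker patches the check:

```python
import hashlib

def check_license(key):
    return key == "s3cret"   # the logic we want to protect

# At protection time, record a digest of the guarded bytecode.
EXPECTED = hashlib.sha256(check_license.__code__.co_code).hexdigest()

def run_guarded(key):
    # Duty 1: detect modification by re-hashing at run time.
    actual = hashlib.sha256(check_license.__code__.co_code).hexdigest()
    if actual != EXPECTED:
        # Duty 2: tamper response (here an obvious failure; in
        # practice it would be stealthy and possibly delayed).
        raise RuntimeError("integrity check failed")
    return check_license(key)

assert run_guarded("s3cret") is True

# Simulate a cracker patching the check to always succeed.
def cracked(key):
    return True
check_license.__code__ = cracked.__code__

try:
    run_guarded("wrong")
    detected = False
except RuntimeError:
    detected = True
assert detected
```

As the paragraph above notes, an obvious exception like this one is exactly what a deployed tamper response should avoid: it points the attacker straight at the check.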

A particularly interesting application of tamperproofing is remote tamperproofing, when the code we want to protect from attack resides on a remote machine and is in constant contact with our own server. For example, consider a multi-player online computer game where, for performance reasons, the client program caches information (such as local maps) that it won't let the player see. A player who can hack around such limitations will get an unfair advantage over his competitors. Another application of remote tamperproofing is protecting the Internet from rampant attacks on the TCP/IP code stack. An adversary who can hack the networking code to subvert the back-off protocol and distribute it widely could have a devastating effect on the network. Remote tamperproofing also has applications to grid computing (someone buying cycles on a supercomputer needs to protect the privacy of their inputs and outputs and the integrity of the code itself), wireless sensor networks (the nodes must collectively detect if one or more sensors have been compromised), and digital medical records (servers must be able to detect tampering of terminals in doctors' offices).

Watermarking code is a way to track programs by inserting unique identifiers into them. Specifically, given a program P, a watermark w (an integer or string), and a key k, a software watermark embedder produces a new program Pw. We want Pw to be semantically equivalent to P (have the same input/output behavior), be only marginally larger and slower than P, and, of course, contain the watermark w. The extractor takes Pw and the key k as input and returns w. The mark also has to be resilient to attacks. An adversary could, for example, subject Pw to semantics-preserving transformations (code optimization and obfuscation) in order to destroy the mark.

Watermarking algorithms are also based on code transformations. For example, algorithms have been designed based on permuting the code, inserting non-functional code, or as solutions to static analysis problems.
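The permutation idea can be made concrete with a small sketch, assuming a program split into independently reorderable blocks (the block names here are hypothetical). An integer watermark selects one of the n! orderings via the factorial number system, and the extractor reads it back:

```python
from math import factorial

def embed(blocks, w):
    """Reorder code 'blocks' so that the chosen permutation
    encodes the integer watermark w (factorial number system)."""
    pool, out = list(blocks), []
    for i in range(len(blocks) - 1, -1, -1):
        idx, w = divmod(w, factorial(i))
        out.append(pool.pop(idx))
    return out

def extract(original, marked):
    """Recover w by reading back which permutation was applied."""
    pool, w = list(original), 0
    for i, block in enumerate(marked):
        idx = pool.index(block)
        w += idx * factorial(len(original) - 1 - i)
        pool.pop(idx)
    return w

blocks = ["b0", "b1", "b2", "b3", "b4"]   # reorderable code blocks
marked = embed(blocks, 93)                # any w < 5! = 120 fits
assert extract(blocks, marked) == 93
```

A permutation mark like this is cheap (no new code is inserted) but fragile: any semantics-preserving reordering by an attacker destroys it, which is exactly the resilience problem the course discusses.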

Designing obfuscation, watermarking and tamper-proofing algorithms requires analysis of programming languages to identify the security properties that must be maintained, understanding the limits of analysis and creating transformations which exploit these limits. As such, software protection provides a rich source of challenge problems to motivate theoretical as well as practical and experimental research.

Outline

  1. Introduction. What is software protection? What problems do we work on?
  2. Attack Models. Who is our adversary? What techniques are at his disposal?
  3. Code Obfuscation. Code transformation techniques for preventing malicious reverse engineering of programs. How do we defeat static analysis? How do we defeat dynamic analysis? How can adversaries use obfuscation to affect the results of electronic voting?
  4. Obfuscation Theory. Theoretical background to obfuscation. What can we hide in a program? What can't we hide in a program?
  5. Tamperproofing. Techniques for preventing modifications of programs. How can we stop the removal of licensing checks? How can we stop cheating in on-line games? How can we prevent attacks against the TCP stack that could potentially take down the Internet?
  6. Watermarking. Techniques for embedding unique identifiers in programs to prevent software piracy.
  7. Conclusion. Directions for future research.

Prerequisites

An understanding of basic compiler/program analysis techniques is helpful, but not necessary.

About the Instructors

Dr. Christian Collberg received a BSc in Computer Science and Numerical Analysis and a Ph.D. in Computer Science from Lund University, Sweden. He is currently an Associate Professor in the Department of Computer Science at the University of Arizona and has also worked at the University of Auckland, New Zealand, and the Chinese Academy of Sciences in Beijing. Prof. Collberg is a leading researcher in the intellectual property protection of software, and also maintains an interest in compiler and programming language research. In his spare time he writes songs, sings, and plays guitar for The Zax and hopes one day to finish up his Great Swedish Novel.

Dr. Jasvir Nagra received his B.Sc. in Mathematics and Computer Science and a Ph.D. in Computer Science from the University of Auckland, New Zealand. He was a postdoctoral scholar on the RE-TRUST project at the University of Trento, where his focus was on applying obfuscation, tamperproofing and watermarking techniques to protect the integrity of software executing on a remote untrusted platform. His research interests also include the design of programming languages and its impact on the security of applications. He is currently with Google, Inc., where he is building Caja, an open-source secure subset of JavaScript. In his spare time Jasvir dabbles with Lego and one day hopes to finish building his Turing machine made entirely out of Lego blocks.


CANCELLED Course M4 – Taking Advantage of Java Security Features

Jeremy Powell, atsec

Monday, December 5th, Full Day

The Java Virtual Machine and the Java standard libraries provide a wealth of security mechanisms that may be used to develop applications robust to a variety of attacks. Unfortunately, improperly deploying these mechanisms can lead to inefficient design, insecure code, and frustration to developers.

This course will provide the opportunity to learn, analyze, and discuss the major security features provided by the Java software platform. The objective is to enable software engineers to understand how to design and develop secure Java applications that are both effective and efficient.

Outline

  1. Introduction and context. Objectives. History of Java. Typical Java application security objectives. Motivation for course.
  2. Overview of the Java Language. Primitive types, classes, and object instantiation. Polymorphism and object oriented programming. Automatic memory management. Role of the Java Virtual Machine. Differences between Java and other programming languages (C++, Python).
  3. Type Safety and the Java Virtual Machine. Why is type checking important to security? Execution and memory models. Compilation - linking source code to byte code. Compile-time type checks. Runtime type checks. Native code.
  4. Secure Class Loading. The default class loader. Where things can go wrong. Dynamically loading classes. Writing custom class loaders.
  5. Instrumenting Access Control. Java Authentication and Authorization Services (JAAS). Developing security policies. Instrumenting code with access control.
  6. Case study of a production level Java application.
  7. Questions and answers.

Prerequisites

A good understanding of software engineering principles is required, as well as a working knowledge of the Java programming language (or similar). Also, it is expected that participants are familiar with the concepts of basic security.

About the Instructor

Mr. Jeremy Powell is an enthusiastic security professional focused on a wide variety of security concepts from secure operating system design to cryptography. He has worked for atsec - an independent IT security firm - in Austin, TX for the past three years. Jeremy has a bachelor's degree in computer science from the University of Texas at Austin, where he focused on computer security and advanced mathematics.

Jeremy has served as evaluator or consultant for several Common Criteria evaluation projects with atsec, including several Linux operating system distributions and two Java based enterprise solutions. He is also responsible for the penetration testing services in atsec's U.S. branch, acting as project manager and lead penetration tester.


Course T5 – Virtualization and Security

Mr. Zed Abbadi, Public Company Accounting Oversight Board (PCAOB)

Tuesday Morning, December 6th, Half Day

In recent years, virtualization has become one of the most deployed technologies in the IT field. It provides clear benefits when it comes to utilization, maintenance, redundancy and lower power consumption. However, just like every new technology, virtualization is still evolving and there are still unanswered security questions. Virtualization is a concept that encompasses many types of technologies used in different configurations and for a variety of reasons. Each one of these technologies presents its own unique sets of security challenges and benefits.

This course will provide a basic understanding of the various virtualization technologies and discuss the security aspects and characteristics of each one. It will provide the audience with valuable material on how to utilize virtualization to decrease risks from security attacks and how to avoid vulnerabilities that may accompany virtualization technologies.

Outline

  1. Virtualization Introduction. We will define virtualization, present some history and go over the various types of virtualization that currently exist in the market place.
  2. Server Virtualization. Server virtualization is primarily used to better utilize hardware platforms and allow for easier management of virtual servers. It allows for high availability and provides additional security benefits such as sandboxing and honeypots (including live forensics).
  3. Client Virtualization. Client virtualization spans several technologies and in some cases carries great security and maintainability benefits. Various techniques can be used to isolate different client components in the user environment. Depending on the need and the available infrastructure, client virtualization can take various forms.
  4. OS Streaming. OS streaming is an old concept that has recently started to take hold again. It allows a thin client (a PC or some other device) to run by streaming an OS stored on a networked server. The major distinction here is that the client device does not store any permanent data and relies completely on the network and the OS server to function.
  5. Workspace Virtualization. Workspace virtualization is similar in concept to desktop virtualization; however, only OS and application configuration settings are virtualized, allowing for re-configuration of an already installed OS based on user/enterprise preferences. This is considered a lightweight mode of virtualization, but it nonetheless provides some security benefits, especially when it comes to security hardening through specific settings and configuration.
  6. Hypervisor Security. There has been great interest in the hypervisor (the virtualization kernel) and potential security vulnerabilities that may lead to serious compromises. While so far nothing serious has been uncovered in hypervisors developed by major virtualization vendors, the discussion continues as to whether hypervisors are inherently secure due to their small footprint, or whether it is only a matter of time before serious vulnerabilities are discovered and exploited. We will discuss both points of view and present evidence that supports each.
  7. Isolation and Rollbacks. One great benefit of virtualization is the ability to reset a system to a previously captured state. This can be very useful when an attack has taken place and the only way to recover is to revert to a previous system image. There are, however, risks associated with such a scenario, including synchronization issues, data loss, and malicious software embedded in previous images or snapshots.

Prerequisites

General understanding of computer architecture and basic security concepts.

About the Instructor

Mr. Zed Abbadi is an Application Security Manager with the Public Company Accounting Oversight Board (PCAOB). He has over 18 years of experience in software and security engineering, ranging from providing security consulting services to building large-scale software systems. In his current role he is responsible for the security of all software applications that run on the PCAOB's infrastructure.

Zed holds a Bachelor of Science in Computer Science and a Masters degree in Systems Engineering from George Mason University. He is a published author and has presented at various security conferences.


Course T6 – Security Risk Analysis of Enterprise Networks: Techniques and Challenges

Anoop Singhal, NIST
Xinming (Simon) Ou, Kansas State University

Tuesday Afternoon, December 6th, Half Day

Protection of enterprise networks from malicious intrusions is critical to the economy and security of our nation. The objective of this course is to give an overview of the techniques and challenges for security risk analysis of enterprise networks. A standard model for security analysis will enable us to answer questions such as "are we more secure than yesterday" or "how does the security of one network configuration compare with another one". In this course, we will present a methodology for security risk analysis that is based on the model of attack graphs and the Common Vulnerability Scoring System (CVSS).

At present, computer networks constitute the core component of information technology infrastructures in areas such as power grids, financial data systems, and emergency communication systems. Protection of these networks is therefore critical, and improving their security requires measuring the amount of security provided by different network configurations. A standard model for measuring network security will also bring together users, vendors, and researchers to evaluate methodologies and products for network security.

An essential type of security risk analysis is to determine the level of compromise possible for important hosts in a network from a given starting location. This is a complex task as it depends on the network topology, security policy in the network as determined by the placement of firewalls, routers and switches and on vulnerabilities in hosts and communication protocols. Traditionally, this type of analysis is performed by a red team of computer security professionals who actively test the network by running exploits that compromise the system. Red team exercises are effective, however they are labor intensive and time consuming. There is a need for alternate approaches that can work with host vulnerability scans.

Attack graphs illustrate the cumulative effect of attack steps, showing how individual steps can potentially enable an attacker to gain privileges deep within the network. CVSS is a risk measurement system that estimates the likelihood that a single attack step is successfully executed. We will present a methodology that measures overall system risk by combining the attack graph structure with CVSS scores: our technique analyzes all attack paths through a network, providing a probabilistic metric of the overall system risk.
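A deliberately simplified sketch of this kind of metric (the scores and step names are hypothetical, and it ignores steps shared between paths): convert each step's CVSS base score into a rough success likelihood, multiply along each attack path, and combine the paths as a noisy-OR.

```python
from math import prod

# Hypothetical CVSS base scores (0-10) for each exploit step,
# scaled into rough per-step success likelihoods.
step_prob = {
    "web_cgi":    8.0 / 10,
    "local_priv": 6.8 / 10,
    "db_creds":   4.3 / 10,
}

# Attack paths from the attack graph: each path is a sequence of
# steps that must all succeed for the attacker to reach the target.
paths = [
    ["web_cgi", "local_priv"],             # direct, via the web server
    ["web_cgi", "db_creds", "local_priv"], # indirect, via the database
]

def path_prob(path):
    # Steps along one path assumed independent for simplicity.
    return prod(step_prob[s] for s in path)

# Overall risk: the target falls if at least one path succeeds.
risk = 1 - prod(1 - path_prob(p) for p in paths)
assert 0.65 < risk < 0.66
```

Real analyses (e.g. with Bayesian networks, as in outline item 4) handle the dependence between paths that share a step, which this toy noisy-OR deliberately glosses over.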

Outline

  1. Basics of Enterprise Network Security.
  2. Threats to Networks.
  3. Common Vulnerability Scoring System (CVSS).
  4. Attack Graphs, Bayesian Networks and Tools for generating Attack Graphs.
  5. Security Risk Analysis of Enterprise Systems using Attack Graphs.
  6. Challenges and Future Directions.
  7. Conclusions.

Prerequisites

The anticipated participants are IT security professionals in industry and academia, researchers in computer and network security, and graduate students.

About the Instructors

Dr. Anoop Singhal is currently a Senior Computer Scientist in the Computer Security Division at NIST. His research interests are in secure web services and network security, intrusion detection and large scale data mining systems. He has several years of research experience at NIST, George Mason University and AT&T Bell Labs. As a Distinguished Member of Technical Staff at Bell Labs he has led several research projects in the area of Databases and Data Mining Systems, Web Services and Network Management Systems. He is a senior member of IEEE and he has published more than 25 papers in leading conferences and journals. He received his Ph.D. in Computer Science from Ohio State University, Columbus Ohio. He has given several talks and presented papers in conferences such as RSA 2007, IFIP DBSEC 2010, ACM CCS 2010 and ACSAC 2009.

Dr. Xinming Ou is currently an Assistant Professor at Kansas State University. He received his PhD from Princeton University in 2005, where he designed the MulVAL network security analyzer as his PhD dissertation work. He was a post-doctoral research associate at Purdue University's CERIAS center from Sept 2005 to May 2006, and joined Kansas State University in Aug 2006. Dr. Ou has also visited Idaho National Laboratory (INL) for the summers of 2006 and 2007 as a research associate, working with INL scientists on applying logical attack graphs to analyze the security threats facing the nation's critical infrastructures. Dr. Ou's current research activities focus on enterprise network security defense, including security configuration management, intrusion analysis, and real-time situation awareness. He is a recipient of NSF Faculty Early Career Development (CAREER) Award.


CANCELLED Course T7 – CERT Secure Coding Standards and SCALe

Mr. Robert C. Seacord, CERT   Software Engineering Institute

Tuesday, December 6th, Full Day

CERT has created the Source Code Analysis Laboratory (SCALe), which offers conformance testing of software systems to CERT secure coding standards, including the CERT C Secure Coding Standard and the CERT Oracle Secure Coding Standard for Java.

SCALe evaluates client source code using multiple analyzers, including static analysis tools, dynamic analysis tools, and fuzz testing. CERT reports any violations of the secure coding rules to the developer. The developer may repair and resubmit the software for reevaluation. After the developer has addressed these findings and the SCALe team determines that the product version tested conforms to the standard, CERT issues the developer a certificate and lists the system in a registry of conforming systems.
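The value of evaluating code with multiple independent analyzers, as SCALe does, is that agreement between tools raises confidence in a finding while disagreement flags items for manual review. The sketch below illustrates that triage idea in Python; the normalized `(file, line, rule)` tuples and the rule identifiers shown are assumptions for illustration, not an actual SCALe data format.

```python
# Illustrative only: merge findings from two hypothetical analyzers.
# Findings are assumed to be normalized to (file, line, rule-ID) tuples;
# a real conformance assessment works from each tool's native output.

def merge_findings(analyzer_a, analyzer_b):
    """Partition findings into those flagged by both tools (likely true
    positives) and those flagged by only one (manual review needed)."""
    a, b = set(analyzer_a), set(analyzer_b)
    return {
        "confirmed": sorted(a & b),     # flagged by both analyzers
        "needs_review": sorted(a ^ b),  # flagged by exactly one analyzer
    }

findings_a = [("parse.c", 42, "STR31-C"), ("io.c", 7, "FIO30-C")]
findings_b = [("parse.c", 42, "STR31-C"), ("mem.c", 19, "MEM34-C")]

result = merge_findings(findings_a, findings_b)
```

Intersecting tool output this way is a common first pass when facing large result sets; the single-tool findings still require review, since each analyzer covers rules the others miss.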

This professional development course will teach you how to securely code systems in C and Java that can successfully pass conformance testing in SCALe.

Outline

  1. Source Code Analysis Laboratory. Conformance Testing Outcomes. SCALe Laboratory Environment. Conformance Testing Process. The Use of Analyzers in Conformance Testing. Conformance Test Results. CERT SCALe Seal. SCALe Accreditation.
  2. The CERT Oracle Secure Coding Standard for Java. Input Validation and Data Sanitization (IDS). Declarations and Initialization (DCL). Expressions (EXP). Numeric Types and Operations (NUM). Object Orientation (OBJ). Methods (MET). Exceptional Behavior (ERR). Input Output (FIO). Serialization (SER). Platform Security (SEC). Runtime Environment (ENV).
  3. The CERT C Secure Coding Standard. Declarations and Initialization (DCL). Expressions (EXP). Integers (INT). Arrays (ARR). Characters and Strings (STR). Memory Management (MEM). Input Output (FIO). Environment (ENV). Signals (SIG).
  4. Developing Conforming Systems. Developer training. TSP Secure. Static analysis in the development process. ISO/IEC C Secure Coding Rules Study Group. False positive/ False negative rates. Legacy code / new code. Annotations.
  5. Summary.

Prerequisites

Course participants must understand programming at a technical level to get good value from the course. Practicing C and Java programmers will derive the greatest benefit, but programmers who use other languages, such as C++, will also find the course useful.

About the Instructor

Mr. Robert C. Seacord is the author of The CERT C Secure Coding Standard (Addison-Wesley, 2008) and Secure Coding in C and C++ (Addison-Wesley, 2005), providing guidance on secure practices in C and C++ programming. Seacord leads the Secure Coding Initiative at CERT, located in Carnegie Mellon's Software Engineering Institute (SEI) in Pittsburgh, PA. CERT's Secure Coding Initiative develops and promulgates secure coding practices and techniques, such as CERT's Secure Code Analysis Laboratory (SCALe), the first to certify software for conformance with secure coding standards. His research group develops publicly available tools for the analysis and development of secure software. Seacord is an adjunct professor in the Carnegie Mellon University School of Computer Science and in the Information Networking Institute, and a frequent speaker throughout the world. Seacord is also a technical expert for the ISO/IEC JTC1/SC22/WG14 international standardization working group for the C programming language.


Course T8 – The Bro Network Intrusion Detection System

Seth Hall & Robin Sommer, International Computer Science Institute

Tuesday, December 6th, Full Day

This course gives an introduction to the Bro network intrusion detection system, a flexible open-source system that runs on commodity hardware. The Bro system provides a powerful means for expressing network security analysis tasks at different semantic levels and is not tied to any particular detection approach. Bro achieves its rich, semantic processing by providing a domain-specific analysis language that makes it fully customizable to a site's security policy. Well grounded in more than 15 years of research, the system has successfully bridged the traditional gap between academia and operations since its inception. Today, it is relied upon operationally by many scientific environments for securing their cyberinfrastructure. Bro's user community includes major universities, research labs, supercomputing centers, and open-science communities. The system has also been used in numerous research studies aimed at understanding the specifics of network traffic, often even independent of security aspects. The presenters are both members of Bro's core development team.

This course will provide attendees with an in-depth understanding of operating Bro installations. We will present an overview of the system's philosophy & architecture, and provide a step-by-step introduction to using the system effectively in operational environments. In particular, we will focus on major new functionality that we are currently developing for the next public Bro release, scheduled for fall 2011. After the course, attendees will be able to start building their own Bro setups and write tailored site-specific analysis scripts. We will also cover a range of more specific topics, including interfacing Bro with external applications, which will allow attendees to integrate the system into their existing setups.

The course's content will be partially based on past "Bro Hands-On Workshops" that we have held quite successfully at the San Diego Supercomputer Center in 2007 and at UC Berkeley in 2009; as well as on a former ACSAC tutorial held in 2009. These events were attended by network operators from academia, industry and government sites. The course hand-outs will include slides as well as a number of exercises for attendees to practice what they have learned.

Outline

The course is tentatively structured as follows:

  1. Bro Design Overview. System philosophy. Architecture
  2. Installing the Bro NIDS. Compilation & installation. Basic command-line usage
  3. Basics of Using Bro. Typical Bro usage. Basic customization
  4. Scripting Language Overview. Syntax. Data types. Example scripts
  5. Advanced Bro Scripting. State management & persistence. Signatures. Profiling & debugging
  6. Bro Communication. Inter-Bro communication. Interfacing with external applications.
  7. The Time Machine. Interfacing Bro with a packet bulk recorder
  8. The Bro Cluster. Architecture. Operation
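Bro scripts are written in Bro's own domain-specific language, but the kind of stateful, per-endpoint analysis they express (items 4 and 5 above) can be sketched in ordinary Python. The event stream, service name, and the threshold of three failed attempts below are all invented for illustration; they are not from an actual Bro policy script.

```python
# Illustrative Python sketch of the stateful, event-driven analysis that a
# Bro policy script expresses natively: count failed SSH logins per source
# address and raise an alert when a (hypothetical) threshold is crossed.
from collections import defaultdict

FAIL_THRESHOLD = 3               # invented threshold for this example
failed_logins = defaultdict(int)
alerts = []

def connection_event(src_ip, service, success):
    """Handle one login-attempt event, tracking state across events."""
    if service == "ssh" and not success:
        failed_logins[src_ip] += 1
        if failed_logins[src_ip] == FAIL_THRESHOLD:
            alerts.append(f"possible SSH brute force from {src_ip}")

events = [("10.0.0.5", "ssh", False)] * 3 + [("10.0.0.9", "ssh", True)]
for src, svc, ok in events:
    connection_event(src, svc, ok)
```

The point of the sketch is the programming model, not the detection logic: Bro delivers protocol-level events to handlers, and site policy decides what state to keep and when to alert.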

Prerequisites

The course is primarily targeted at two groups of attendees: security staff of network environments considering an operational deployment of the Bro NIDS; and academic researchers and students with the need for a flexible network traffic analysis platform. We do not assume any prior knowledge about using Bro, though attendees should be familiar with Unix shell usage and have a comfortable understanding of Internet protocols and tools for examining network traffic (e.g., tcpdump or Wireshark).

About the Instructors

Mr. Seth Hall is a research engineer with the International Computer Science Institute, where he acts as a developer and performs community outreach for the Bro-IDS project. Previously, Seth was employed by The Ohio State University's network security group, where he built the world's first operationally used Bro cluster and wrote numerous custom analysis scripts. He also spent a short time with GE building a new platform for their company-wide intrusion detection activities.

Dr. Robin Sommer is a staff researcher at the International Computer Science Institute in Berkeley, and he is also a member of the cyber-security team at the Lawrence Berkeley National Laboratory. His research focus is on network security monitoring in operational high-performance settings, and he is one of the core developers of the Bro system. Robin co-chaired the 2010 and 2011 Symposia on Recent Advances in Intrusion Detection, and he has served on numerous academic program committees and review panels. He holds a doctoral degree from TU Munich, Germany.


TF1 – Tracer FIRE – A Forensic and Incident Response Exercise – Part 1

Kevin Nauer and Benjamin Anderson, Sandia National Laboratories

Monday, December 5th, Full Day

Tracer FIRE (Forensic and Incident Response Exercise) is a program developed by Sandia and Los Alamos National Laboratories to educate and train cyber security incident responders (CSIRs) and analysts in critical skill areas, and to improve collaboration and teamwork among staff members. Under this program, several hundred CSIRs from the Department of Energy and other U.S. government agencies have been trained. In Tracer FIRE, attendees will learn about a variety of topics in the areas of incident response, forensic investigation and analysis, file systems, memory layout, and malware analysis. Tracer FIRE includes a mixture of lecture, hands-on training, and competitive exercises designed to provide attendees with the knowledge and practice to apply what they have learned in a real-world situation.

This full-day professional development course is split into two sections. The morning classroom portion will consist of both lecture and hands-on training with forensic analysis tools. After this classroom training, attendees will be familiar with the difficulties facing CSIR teams in performing "cyber triage" of incidents, and with many of the methods used to quickly identify important events on the network. In addition, students will receive hands-on training in forensic search, collection, and analysis of NT file systems using the EnCase Enterprise forensic tool suite, the leading computer forensic solution in use by government agencies and corporations.

In the afternoon, attendees will be divided into teams and will participate in a competition that requires them to apply what they learned during the classroom training. During this competition, the teams will solve cyber security challenges involving host forensic analysis while defending their own computer systems and attacking the systems belonging to the other teams. This exercise gives attendees practice at maintaining network situational awareness and using forensic tools, and hones their teaming and communication skills.

Note: This course is Part 1 of a two-part course. Attendees are encouraged to enroll in TF1 (Part 1) and TF2 (Part 2). A discounted combination rate is provided for those attending both days. Student scholarships may be available – please see http://www.acsac.org/2011/cfp/students/.

Outline

  1. Rapid Response Cyber Forensics. The need for "cyber triage". Tools and protocols used by CSIR teams to discover events. Methods to prioritize actionable events. Importance of updating defensive systems.
  2. Introduction to EnCase Enterprise. Acquiring a remote forensic image. Difference between logical and physical images. Basic examination of forensic image. General functionality.
  3. File Systems for Incident Responders. Low level details of the NT file system. Associated artifacts of the operating system. Windows registry.
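The "methods to prioritize actionable events" in item 1 above amount to ranking an incoming event queue so responders see the most urgent items first. A minimal sketch of that idea follows; the event types, severity weights, and field names are all hypothetical, not part of the Tracer FIRE curriculum.

```python
# Hypothetical "cyber triage" sketch: score incoming events and rank the
# queue so the most actionable items surface first. The event types and
# severity weights below are invented for illustration.

SEVERITY = {"malware_beacon": 90, "port_scan": 40, "failed_login": 20}

def triage(events):
    """Rank events by severity weight, breaking ties by hosts affected."""
    return sorted(events,
                  key=lambda e: (SEVERITY.get(e["type"], 0), e["host_count"]),
                  reverse=True)

queue = triage([
    {"type": "failed_login", "host_count": 1},
    {"type": "malware_beacon", "host_count": 4},
    {"type": "port_scan", "host_count": 12},
])
```

Real triage also weighs asset criticality and event recency, but the core mechanism is the same: a repeatable scoring function turns a flood of alerts into an ordered work queue.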

Prerequisites

Attendees will require a basic understanding of computer systems, networks and general cyber security concepts. Workstations and the EnCase Enterprise suite will be provided for the attendees – no personal hardware or software is required.

About the Instructors

Mr. Kevin Nauer is a member of technical staff at Sandia with over ten years' experience conducting forensic analysis and leading a team of analysts in incident response operations. For the past three years, Kevin has led the development of a framework to support collaborative cyber security incident response operations. Kevin holds a B.S. and an M.S. in computer science, and he also served as a Captain in the US Army Intelligence and Security Command, where he helped form a new organization to support national intelligence operations integrating computer forensic analysis techniques.

Mr. Ben Anderson is a member of technical staff at Sandia and has conducted research in virtualization and SSD forensics. He holds a master's degree in computer engineering from Iowa State University and previously served in the Marine Corps as a member of its Fleet Antiterrorism Security Team Co.


TF2 – Tracer FIRE – A Forensic and Incident Response Exercise – Part 2

Kevin Nauer and Benjamin Anderson, Sandia National Laboratories

Tuesday, December 6th, Full Day

Tracer FIRE (Forensic and Incident Response Exercise) is a program developed by Sandia and Los Alamos National Laboratories to educate and train cyber security incident responders (CSIRs) and analysts in critical skill areas, and to improve collaboration and teamwork among staff members. Under this program, several hundred CSIRs from the Department of Energy and other U.S. government agencies have been trained. In Tracer FIRE, attendees will learn about a variety of topics in the areas of incident response, forensic investigation and analysis, file systems, memory layout, and malware analysis. Tracer FIRE includes a mixture of lecture, hands-on training, and competitive exercises designed to provide attendees with the knowledge and practice to apply what they have learned in a real-world situation.

This full-day professional development course is designed for professionals and graduate students studying computer forensics, or as an extension of Part 1 of the Tracer FIRE course (Day 1 – TF1). The morning classroom portion will consist of both lecture and hands-on training in specialized forensic areas. After this training, attendees will be familiar with the layout of Windows memory and with how to acquire the memory contents of a live system. In addition, attendees will practice file carving to recover "lost" data, increasing their ability to retrieve evidence, and PDF dissection and analysis to identify and analyze malware that uses this infection vector, increasing their ability to mitigate and prevent future intrusions.
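Once a memory image has been acquired, one of the most basic examination steps is pulling printable strings out of the raw bytes, much like the Unix `strings` tool, to surface command lines, URLs, and other artifacts. A minimal sketch of that step follows; the in-memory byte string stands in for a real acquired image, and the contents are fabricated.

```python
# Minimal sketch of one basic memory-examination step: extracting
# printable ASCII runs from a raw memory image, in the spirit of the
# Unix `strings` tool. The "image" below is fabricated for illustration.
import re

def extract_strings(image: bytes, min_len: int = 4):
    """Return printable ASCII runs of at least min_len characters."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, image)]

fake_image = b"\x00\x01cmd.exe /c whoami\xff\x00ab\x00http://example.test\x02"
artifacts = extract_strings(fake_image)
```

Real memory analysis goes far beyond this (reconstructing process lists, handles, and loaded modules from kernel structures), but string extraction remains a fast first pass for spotting suspicious artifacts.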

In the afternoon, attendees will be divided into teams and will participate in a competition that requires them to apply what they learned during the classroom training. During this competition, the teams will solve cyber security challenges involving memory analysis and malware discovery while defending their own computer systems and attacking the systems belonging to the other teams. This exercise gives attendees practice at maintaining network situational awareness and using forensic tools, and hones their teaming and communication skills.

Note: This course is Part 2 of a two-part course. Attendees are encouraged to enroll in TF1 (Part 1) and TF2 (Part 2). A discounted combination rate is provided for those attending both days. Student scholarships may be available – please see http://www.acsac.org/2011/cfp/students/.

Outline

  1. Rapid Response Cyber Forensics Review. A review of topics from the first day tutorial. Serves as an introduction to the topic area for those attending only this tutorial, and not both days.
  2. Memory Space: The Final Frontier. Issues with acquisition of a memory image from a live system. Layout of memory in Windows. Examination of memory contents.
  3. Forensic Memory Acquisition and Analysis. Use of Memoryze and Audit Viewer. Hands-on memory acquisition. Analysis of memory image.
  4. File Carving and PDF Dissecting. Reasons for file carving. Discovery and reassembly of file fragments. Examination of PDF file format. Malicious PDF analysis.
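The file carving in item 4 above works by scanning raw bytes for known file signatures rather than relying on file-system metadata. The sketch below carves a PDF by its magic header (`%PDF`) and trailer (`%%EOF`); it assumes the file is stored contiguously, whereas real carvers must also discover and reassemble fragments.

```python
# Simple header/footer carving sketch: recover a PDF embedded in a raw
# byte stream by locating its magic header (%PDF) and trailer (%%EOF).
# Assumes a contiguous, unfragmented file; the blob below is fabricated.

def carve_pdf(data: bytes):
    """Return the first contiguous %PDF ... %%EOF region, or None."""
    start = data.find(b"%PDF")
    if start == -1:
        return None
    end = data.find(b"%%EOF", start)
    if end == -1:
        return None
    return data[start:end + len(b"%%EOF")]

blob = b"\x00garbage\x00%PDF-1.4 fake body %%EOF\x00more garbage"
carved = carve_pdf(blob)
```

The same header/footer approach extends to other formats with well-known signatures (JPEG, ZIP, registry hives); the carved PDF can then be dissected for embedded JavaScript or other malicious content, as the outline describes.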

Prerequisites

Attendees will require a basic understanding of computer systems, networks and general cyber security concepts. Workstations and the EnCase Enterprise suite will be provided for the attendees – no personal hardware or software is required.

About the Instructors

Mr. Kevin Nauer is a member of technical staff at Sandia with over ten years' experience conducting forensic analysis and leading a team of analysts in incident response operations. For the past three years, Kevin has led the development of a framework to support collaborative cyber security incident response operations. Kevin holds a B.S. and an M.S. in computer science, and he also served as a Captain in the US Army Intelligence and Security Command, where he helped form a new organization to support national intelligence operations integrating computer forensic analysis techniques.

Mr. Ben Anderson is a member of technical staff at Sandia and has conducted research in virtualization and SSD forensics. He holds a master's degree in computer engineering from Iowa State University and previously served in the Marine Corps as a member of its Fleet Antiterrorism Security Team Co.