Steven J. Greenwald
Wenliang (Kevin) Du
University of Idaho
This forum presents a selection of the best, most interesting, and most provocative work from the New Security Paradigms Workshop 2001 (sponsored by ACM). The panelists are the authors of papers selected by the NSPW 2001 General Chairs from those presented at the workshop. They will talk about new paradigms in information risk management, defense enabling to promote survival, and privacy-preserving multi-party computation.
This year's workshop was held September 11-13, 2001 in Cloudcroft, New Mexico, USA. The URL for more information is www.nspw.org.
Bob Blakley, Tivoli-IBM, USA
Information security is important in proportion to a business's dependence on information technology and to the degree that business information is exposed to risk. Information security technology, however, deals with only a small fraction of the problem of information risk. In fact, the evidence increasingly suggests that information security technology does NOT reduce information risk very effectively. Our position is that we must reconsider our approach to information security from the ground up if we are to deal effectively with the problem of information risk.
[Bob Blakley, Ellen McDermott, Dan Geer: "Information Security is Information Risk Management"]
Wenliang (Kevin) Du, Syracuse University, USA
The growth of the Internet has triggered tremendous opportunities for cooperative computation, where people jointly conduct computation tasks based on the private inputs they supply. These computations could occur between mutually untrusted parties, or even between competitors. Today, to conduct such computations, one entity must usually know the inputs from all the participants; however, if nobody can be trusted enough to know all the inputs, privacy becomes a primary concern. Is it possible to conduct this type of computation without disclosing each participant's private information? In this paper, we have identified, defined, and discussed a number of privacy-preserving problems of this type across a spectrum of computation domains. These problems include privacy-preserving database query, privacy-preserving scientific computation, privacy-preserving intrusion detection, privacy-preserving statistical analysis, privacy-preserving geometric computation, and privacy-preserving data mining.
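To make the idea concrete, here is a minimal illustrative sketch of one classic building block for such computations: additive secret sharing, which lets parties learn the sum of their private inputs (say, salaries) without any single party seeing another's value. This is a toy sketch, not the protocol framework of the paper; the function names and the choice of modulus are assumptions for illustration.

```python
import random

PRIME = 2**61 - 1  # field modulus; any large prime works for this toy

def share(secret, n):
    """Split a secret into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def private_sum(inputs):
    """Each party splits its input into one share per party.
    Each party then adds up the shares it receives; publishing only
    these partial sums reveals the total but no individual input."""
    n = len(inputs)
    all_shares = [share(x, n) for x in inputs]
    # party j collects the j-th share from every party
    partials = [sum(all_shares[i][j] for i in range(n)) % PRIME
                for j in range(n)]
    return sum(partials) % PRIME

salaries = [52000, 61000, 47000]
print(private_sum(salaries))  # 160000
```

Each share in isolation is a uniformly random field element, so it carries no information about the secret; only the combination of all partial sums recovers the total.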
[Wenliang Du, Mikhail J. Atallah: "Secure Multi-Party Computation Problems and Their Applications: A Review and Open Problems"]
Carol Taylor, University of Idaho, USA
A new approach to network intrusion detection is needed to solve the monitoring problems of high-volume network data and the time constraints of Intrusion Detection System (IDS) management. Most current network IDSs have not been specifically designed for high-speed traffic or low maintenance. We propose a solution to these problems which we call NATE, Network Analysis of Anomalous Traffic Events. Our approach features minimal network traffic measurement, an anomaly-based detection method, and a limited attack scope. NATE is similar to other lightweight approaches in its simplified design, but our approach, being anomaly based, should be more efficient in both operation and maintenance than other lightweight approaches. We present the method and perform an empirical test using MIT Lincoln Lab data.
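The core idea of anomaly-based detection on minimal traffic measurements can be sketched as follows: learn simple per-feature statistics from normal sessions, then flag sessions that deviate strongly. This toy z-score detector is an illustration of the general approach only, not the NATE method itself; the feature vectors (packet count, average inter-arrival time) and threshold are assumptions.

```python
def fit(normal):
    """Learn per-feature mean and standard deviation from normal sessions."""
    n, dims = len(normal), len(normal[0])
    means = [sum(row[d] for row in normal) / n for d in range(dims)]
    stds = [(sum((row[d] - means[d]) ** 2 for row in normal) / n) ** 0.5
            or 1.0  # guard against zero variance
            for d in range(dims)]
    return means, stds

def score(sample, means, stds):
    """Anomaly score: largest per-feature deviation in standard units."""
    return max(abs(sample[d] - means[d]) / stds[d]
               for d in range(len(sample)))

# hypothetical sessions: [packet count, mean inter-arrival time (s)]
normal = [[10, 0.20], [12, 0.25], [11, 0.22], [9, 0.18]]
means, stds = fit(normal)
print(score([11, 0.21], means, stds) < 3)   # True: typical session
print(score([300, 0.90], means, stds) > 3)  # True: flood-like outlier
```

A detector of this shape needs only a handful of header-derived measurements per session, which is what makes the lightweight, low-maintenance design plausible.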
[Carol Taylor, Jim Alves-Foss: "NATE, Network Analysis of Anomalous Traffic Events, A Low-cost Approach"]
Franklin Webber, BBN, USA
We claim that a typical distributed software application can be given increased resistance to malicious attack even though the environment in which it runs is untrustworthy. This increased resistance comes from adaptation and an awareness of resource management: the application responds to attacks that deprive it of resources by attempting to recover control of those resources or to find substitutes for them. Our claim does not contradict the traditional approach to security, but complements it. In the traditional approach, core parts of a system are made secure from the bottom up so that applications can be securely layered on top. The traditional approach, however, seems to be too expensive for most purposes and in practice most operating systems and networks offer only flawed security to their applications. Our approach instead centers on helping the application to defend itself. We expect that traditional defenses built into the environment will block some attacks, but that the application also needs to react to attacks that penetrate the traditional defenses. We are testing our claim with Red Team experiments. We have implemented application defenses in a resource-management middleware framework and we are now designing experiments to measure the effectiveness of those defenses under attack.
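The adaptive response described above, recovering a lost resource or substituting another for it, can be caricatured in a few lines. This is a hypothetical sketch, not the BBN middleware; the provider names and the liveness-check interface are invented for illustration.

```python
def acquire(providers, is_alive):
    """Return the first resource provider that still responds,
    modeling fallback to a substitute when an attack deprives the
    application of its primary resource."""
    for p in providers:
        if is_alive(p):
            return p
    raise RuntimeError("no usable resource provider")

# primary has been knocked out by an attack; replicas survive
alive = {"primary": False, "replica-1": True, "replica-2": True}
chosen = acquire(["primary", "replica-1", "replica-2"], alive.get)
print(chosen)  # replica-1
```

The point is the control loop, not the mechanism: the application (or middleware acting for it) monitors its resources and reacts, rather than assuming the environment beneath it is secure.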
[Partha Pal, Franklin Webber, Richard Schantz: "Survival by Defense-Enabling"]