Keynote: Computer Security in the Age of Large Language Models
This talk considers what it means to study computer security in this new age of large language models. How will these models be used? What new tasks will they solve? What new vulnerabilities arise when language models are deployed in these ways? Can language models help solve existing security problems, or will they instead hand an advantage to adversaries who can use them to automate attacks?
About the Speaker:
Nicholas Carlini (https://nicholas.carlini.com/) is a research scientist at Google DeepMind working at the intersection of machine learning and computer security. His most recent line of work studies the security and privacy of neural networks, for which he has received best paper awards at ICML, USENIX Security, and IEEE S&P. He received his PhD from UC Berkeley in 2018.