
Expert Interview: Sean Peisert on Cybersecurity Research
[00:00:09] Lauren: Hello, I’m Lauren Biron, a science writer at Lawrence Berkeley National Laboratory, and I’m here today with Sean Peisert, who leads computer security research and development. Sean, to start, could you give us a rundown of what kind of cybersecurity research happens at Berkeley Lab?
[00:00:26] Sean Peisert: So one of the areas in which we do cybersecurity research at the lab is cybersecurity for science itself. Specifically, there is a great deal of science that is conducted using research cyberinfrastructure in some way. And what that means is not just a computer of some kind, but all sorts of network-connected instruments.
And they can be really big instruments, from light sources and radio telescopes down to really small instruments like seismic sensors that you might have planted somewhere. And the ways all of those devices are interconnected and put together for scientific research leave them vulnerable to attack of some kind.
A second area is cybersecurity for energy delivery systems, including the power grid. That’s pretty much everything from the substation that you see at the edge of town back down to houses and buildings.
And we also do cybersecurity in support of nuclear non-proliferation and nuclear safeguards, focusing on the equipment that is used in monitoring, arms control, and treaty verification applications.
[00:01:35] Lauren: So when it comes to high performance computing, Berkeley Lab manages NERSC, the National Energy Research Scientific Computing Center, which provides major supercomputing power for experimental data analysis and simulations. Why is securing something like NERSC an important focus?
[00:01:52] Sean Peisert: Yeah, so, what I ended up thinking a lot about when I came to the Berkeley Lab is: How can we secure high-performance computing? Because it is a major asset. It is a national asset. And so there are numerous things that could happen if somebody were to attack one of these systems.
So you could have data deleted or altered. You could simply be causing embarrassment. You could have the system shut down, and therefore you're just losing cycles, right?
Another thing that we cared about relates to identifying what people are running on a computing system. NERSC does entirely unclassified, open science. So we would want to know if somebody compromised an account and was cracking passwords on NERSC, or if somebody was running classified software on NERSC, or doing something else illegal – or simply something other than what they were allocated time to do.
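As a rough illustration of the kind of workload monitoring Sean describes, the sketch below flags a job whose resource profile deviates sharply from an account's historical baseline. The metric names, thresholds, and data are hypothetical assumptions for illustration, not NERSC's actual detection pipeline.

```python
# Hypothetical sketch: flag an HPC job whose resource profile looks unlike
# what the same account normally runs. Metric names, thresholds, and data
# are illustrative, not NERSC's actual monitoring pipeline.
from statistics import mean, stdev

def is_anomalous(history, job, threshold=3.0):
    """Return True if any monitored metric of `job` sits more than
    `threshold` standard deviations from the account's historical mean."""
    for metric, value in job.items():
        past = [h[metric] for h in history]
        if len(past) < 2:
            continue  # not enough history to judge this metric
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            continue  # no variation in history; skip rather than divide by zero
        if abs(value - mu) / sigma > threshold:
            return True
    return False

# An account that normally runs long, GPU-heavy simulations suddenly submits
# a short, CPU-saturating job: the kind of shift that password cracking or
# other off-mission work might produce.
history = [
    {"cpu_util": 0.55, "gpu_util": 0.90, "runtime_h": 12.0},
    {"cpu_util": 0.60, "gpu_util": 0.85, "runtime_h": 10.5},
    {"cpu_util": 0.58, "gpu_util": 0.88, "runtime_h": 11.2},
]
suspect_job = {"cpu_util": 1.0, "gpu_util": 0.0, "runtime_h": 0.4}
print(is_anomalous(history, suspect_job))  # True
```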
[00:02:47] Lauren: Thinking about grid security, I know much of your work has looked at the power grid and potential ways to make it safer. Could you talk us through that research and the unique approach that you take?
[00:02:59] Sean Peisert: The approach that we’ve used in most of our projects on cybersecurity for energy delivery systems is what I would call “physics-based.”
So rather than treating every distinct device as an individual instrument and saying, “Is that hacked,” which is kind of the traditional way we approach computers, we say, “We care about what this device is doing.” And, based on that, it doesn’t matter whether it’s hacked or not; it’s simply misbehaving. And so that tells us something about what’s going wrong.
Ground truth on a computer system is impossible to establish. On the other hand, we have laws that govern electrical flow: Ohm’s law, Kirchhoff’s laws, things like that.
So we actually look at things like voltage and current and phase angle and other aspects of the power grid to tell us whether a device is behaving improperly or not, as an indicator of whether it has been attacked and compromised. And so we do this physics-based intrusion detection or monitoring, and that approach has permeated most of the projects that we’ve worked on.
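A minimal sketch of what such a physics-based check can look like: rather than asking whether a device reports itself as healthy, we test whether its measurements are consistent with Kirchhoff's current law at a bus. The values, tolerance, and structure here are illustrative assumptions, not the lab's actual monitoring code.

```python
# Minimal sketch of a physics-based plausibility check: measurements that
# violate Kirchhoff's current law at a bus get flagged, regardless of what
# any individual device reports about its own health. Values and the noise
# tolerance are illustrative.

def kcl_residual(currents_in_amps, currents_out_amps):
    """Kirchhoff's current law: current flowing into a node must equal
    current flowing out. Returns the imbalance in amps."""
    return abs(sum(currents_in_amps) - sum(currents_out_amps))

def check_bus(currents_in_amps, currents_out_amps, tolerance_amps=0.5):
    """Flag the bus if the imbalance exceeds expected sensor noise."""
    residual = kcl_residual(currents_in_amps, currents_out_amps)
    return {"residual_amps": round(residual, 3), "suspect": residual > tolerance_amps}

# Consistent readings: 120 A in, 119.8 A out is within measurement noise.
print(check_bus([80.0, 40.0], [119.8]))   # {'residual_amps': 0.2, 'suspect': False}
# A spoofed or faulty reading: the physics no longer balances.
print(check_bus([80.0, 40.0], [95.0]))    # {'residual_amps': 25.0, 'suspect': True}
```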
And so that’s been a success story and I think we are really the first to do that at scale. And so that’s been a pretty big win for the lab.
[00:04:03] Lauren: Another aspect of your work on the grid has been developing privacy-preserving techniques, such as differential privacy – a way for utilities to share information so that they are better protected. What was the drive behind that research?
[00:04:19] Sean Peisert: So there’s been a historical reluctance to share information between utilities, because there are aspects of that information that could give an adversary insight into how best to target the grid in order to disrupt it.
And now that we have widespread smart metering of individual buildings, there are additional concerns that you could detect individual behavioral patterns within those buildings. Like, I can tell you what time you open your fridge, because I know the signal that the compressor kicks out when it turns on.
The value of sharing information is that if you find something that’s going on in your grid space, then maybe there’s something going on in your neighbor’s grid space as well, or across the country or whatever it would be.
And so our use of privacy-preserving techniques allows analysis of this data without sharing the raw data. The outputs of these analyses can be seen without exposing, again, things like grid vulnerabilities or individual behavioral patterns within buildings or houses or something like that.
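For a concrete sense of the basic building block involved, the sketch below applies the Laplace mechanism of differential privacy: a utility releases a noisy aggregate (here, how many feeders tripped a local anomaly check) instead of the raw per-meter data. The query, epsilon value, and data are illustrative assumptions, not the project's actual protocol.

```python
# Minimal sketch of the Laplace mechanism, the basic building block of
# differential privacy: a utility shares a noisy aggregate (here, how many
# feeders tripped a local anomaly check) instead of raw measurements.
# The query, epsilon, and data are illustrative, not the project's design.
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(flags, epsilon=1.0):
    """Release a count (sensitivity 1) under epsilon-differential privacy."""
    true_count = sum(flags)
    return true_count + laplace_noise(1.0 / epsilon)

# Each boolean is one feeder's local anomaly verdict; only the noisy total
# ever leaves the utility, so neither grid topology nor per-meter behavior
# is exposed.
feeder_flags = [True, False, False, True, True, False, False, False]
print(round(private_count(feeder_flags, epsilon=0.5), 2))
```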
And the eventual goal is automated response, so that utilities that wish to do so could enable AI-based response in parts of their grid to prevent instabilities. And computers work faster than humans do.
[00:05:44] Lauren: Is that different from how utilities have typically approached things?
[00:05:48] Sean Peisert: Automation is not something that power utilities have historically felt very comfortable with. It really is a bunch of people in an operations center, watching things. Most things don’t require real-time response; you usually have minutes to respond. But that’s for natural faults and things that go wrong, like a power plant getting close to capacity. You know, you do different things to sort of deal with that. A cyber attack is a very different thing. It can have an instant effect, and you don’t necessarily have minutes to respond to prevent disturbances.
[00:06:19] Lauren: How do you see this research rolling out into the world?
[00:06:22] Sean Peisert: The work is in concert with an industry partner. The plan is for them to commercialize what we’re developing as a service, as part of their offerings going forward. They have quite a few clients already, from utilities to regulators and so on, all over the country. And so I’m feeling very bullish on the idea that this work is actually going to get adopted for distributed, privacy-preserving cyber attack detection on power grids.
[00:06:49] Lauren: Do you have an underlying philosophy for how you approach cybersecurity research?
[00:06:55] Sean Peisert: Cybersecurity is meaningless in the abstract. It always has to be cybersecurity of something. When I started working on cybersecurity, I was very much focused on the approach of stopping the bad guy, and thinking about, sort of, the worst things that could happen and figuring out how to defend against them.
What I got really interested in a few years ago was when I learned that the VA, the Department of Veterans Affairs, was starting a program called the Million Veteran Program. It was going to sequence the genomes of a million veterans, collect associated electronic health records, and start figuring out how to address some really chronic issues that the military was facing, including traumatic brain injury, suicide, and prostate cancer.
I started viewing cybersecurity and certain privacy-preserving technologies as enabling technologies rather than as barriers. What I wanted to do was enable something like the Million Veteran Program, or MVP, data analysis at the Berkeley Lab, for example. How can we do that at the Berkeley Lab without assuming an unacceptable amount of risk? That risk was one of the reasons why we didn’t – and still do not – have protected data at the Berkeley Lab.
And so that set me down the path of investigating something called “trusted execution environments.” And we’ve now had a handful of projects on essentially developing our own architecture for trusted execution environments, specifically optimized for scientific data. And we’re now applying that actually to the nuclear environment as well.
So, I’m these days most excited by the kinds of things that my work can enable and make easier or better. And so that to me is more fun than just the next technique to keep the bad guys out.
[00:08:42] Lauren: Finally, if you could have folks walk away knowing one or two things about cybersecurity research at Berkeley Lab, what would they be?
[00:08:50] Sean Peisert: Berkeley Lab does cybersecurity research. Hi. We’re here. And I think we are more forward-looking than many organizations out there because of the nature of our organization. We are able to think 5 to 10 years out, as opposed to needing to field something next year.
And so I think our solutions ultimately end up being more innovative as a result. That degree of innovation comes from the nature of the lab and the fact that we are here, we do cybersecurity, and we do it, in fact, in all of these different sectors. And particularly, our general view of cybersecurity as an enabling technology that allows things to happen is something that I’m pretty excited about telling people about.
[00:09:32] Lauren: Excellent. Thank you, Sean. To learn more about cybersecurity research at Berkeley Lab, visit lbl.gov.
