Tyrone Burke, May 31, 2019

Carleton researchers leading the way in cybersecurity and public safety research

When technology is connected to the internet, it is vulnerable to the threat of cyber attacks.

And the world is about to get a whole lot more connected. Already, there are more connected devices than there are people on Earth, and connectivity is only accelerating. By 2025, the number of connected devices will be more than triple what it is today – as many as 75 billion.

There are many researchers at Carleton working on a variety of cybersecurity issues, such as the security of autonomous systems and connected cars. Here we focus on cybersecurity researchers who are working on ways to mitigate the threat of cyber attacks, and those who are developing our understanding of the nature of those threats and how they can be deterred.

Jason Jaskolka: Designing security in an insecure world

In order to gauge the risk associated with cyber attacks, you need to understand the severity of their potential repercussions.

“If I calculate there are 10 vulnerabilities in a system given one design, but only one vulnerability in a second, it doesn’t necessarily mean that the second design is better than the first,” says Jason Jaskolka, Assistant Professor in the Department of Systems and Computer Engineering.

“You need to take into account their potential impact. With a critical system like a nuclear reactor, the ten vulnerabilities in the first design might only allow lights to blink on and off on a control panel, but the one vulnerability in the second system could cause a complete meltdown. You need to take those impacts into consideration as part of your analysis in order to draw meaningful insights. What do vulnerabilities actually mean in terms of having a quote-unquote secure system?”
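Jaskolka’s point can be seen in a toy risk calculation. The sketch below uses purely hypothetical likelihood and impact numbers (not drawn from his actual analyses) to show how a design with more vulnerabilities can still carry less risk:

```python
# Toy illustration of impact-weighted risk. All numbers are hypothetical.
# Each vulnerability: (likelihood of exploitation, impact on a 0-10 scale)
design_a = [(0.25, 1)] * 10   # ten flaws that only blink control-panel lights
design_b = [(0.5, 10)]        # one flaw that could cause a meltdown

def expected_impact(vulns):
    """Sum of likelihood x impact across a design's vulnerabilities."""
    return sum(p * impact for p, impact in vulns)

print(expected_impact(design_a))  # 2.5 -> ten vulnerabilities, lower risk
print(expected_impact(design_b))  # 5.0 -> one vulnerability, higher risk
```

Counting raw vulnerabilities would favour design B; weighting by impact reverses the ranking.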

No system is ever 100% secure, and Jaskolka evaluates and assures the security of the critical systems needed for a complex society to function in the digital age.

“I focus on the idea of security by design. How do you design and engineer a system that can operate in a zero-trust environment from its inception — assuming that everybody is out to get you, how do we make these dependable trustworthy systems that will stand up to external threats?”

Jaskolka works with organizations like ports, e-health providers and manufacturers to ensure system security, and has a contract with the Critical Infrastructure Resilience Institute at the University of Illinois at Urbana-Champaign, a U.S. Department of Homeland Security Center of Excellence.

He models multi-agent systems to identify vulnerabilities and provide actionable information that can make those systems more secure.

“If there are potentially undesirable interactions between components of a system, you can reengineer the system to avoid those interactions altogether, or if they can’t be avoided, you can put in mitigations or monitors to watch how components are behaving to make sure that nothing fishy is going on.”

Alex Wilner: Policy in the age of cyber attacks

With so much of the infrastructure that’s critical to our national security having migrated online, the havoc that cyber attacks could wreak is without precedent.

A malicious actor with access to the right systems could turn off the lights, seal dams shut until they burst, or turn off the flow of drinking water to entire cities.

How do we prevent these quasi-apocalyptic scenarios? There’s no clear answer.

Alex Wilner is exploring whether deterrence theory even applies to cyber attacks.

“There’s just far more activity in cyberspace than there is in physical space,” says the Assistant Professor of International Affairs in the Norman Paterson School of International Affairs.

“When we’re thinking about deterrence at the state level and below, the question is what are we trying to deter — is it cyber espionage? Cyber infiltration? Cyber theft? Or cyber attacks that lead to kinetic effects? There are all types of theoretical questions you need to ask, but then how do we link all of that to actual policies and strategies for effective deterrence?”

It might not even be possible, and that owes largely to the anonymity of cyberspace. Terrorists, rogue states or ransom-seeking mobsters could all be on the other side of the keyboard. And they could be anywhere.

“The second challenge is attribution,” says Wilner, who was awarded an Insight Development Grant from the Social Sciences and Humanities Research Council (SSHRC) to study state and non-state cyber deterrence in Canada.

“How do you know who’s done what in cyberspace? If a company or a firm or a community group is attacked in cyberspace, how do they trace it, and what do they do about it?”

All of that’s a real impediment to deterrence.

“If you don’t know who attacked you, then you don’t know how to respond appropriately.”

Sonia Chiasson: Passwords fit for humans

“Almost all aspects of cybersecurity relate to humans at some point,” says Sonia Chiasson, Canada Research Chair in Human Oriented Cyber Security.

“End-users rely on systems to be secure in order to go about their daily lives. They engage with security protocols — through passwords, security warnings, or figuring out if something is a phishing attack or malware. At each point, humans need to provide input, make decisions, assess risks — sometimes with flawed or only partial data, or with overwhelming amounts of data.”

Chiasson’s research explores strengths and limitations that humans have when it comes to cybersecurity — seeking to design systems that remain secure, even when users are tired, distracted or busy.

“A lot of security systems are not designed with users in mind. Designers focus on making a system technically sound, but forget to consider that these have to be used by humans.”

Chiasson’s lab is interdisciplinary, bringing together students from diverse disciplinary backgrounds that have included computer science, psychology, journalism, gerontology, graphic design and numerous other fields. The different perspectives they bring help ensure that research always takes into account how humans actually interact with designs.

Take passwords, for example. We all know that long passwords with various types of characters are the most secure, but they’re hard to remember, and people often end up choosing simple passwords, or re-using the same password again and again – leaving all of their accounts exposed if any one account is compromised.

“Just telling users that they need to use strong passwords accomplishes nothing because it doesn’t address the actual problem — the human brain is not meant to remember a whole bunch of strong, random passwords,” says Chiasson.
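A back-of-envelope entropy calculation illustrates the burden Chiasson describes. The sketch below (a standard estimate, not from her research) shows how much randomness a single strong password carries, which is precisely what makes memorizing many of them unrealistic:

```python
import math

def entropy_bits(alphabet_size, length):
    # Bits of entropy in a password chosen uniformly at random:
    # length * log2(alphabet size)
    return length * math.log2(alphabet_size)

# 12 random printable-ASCII characters vs. a 4-digit PIN
print(round(entropy_bits(95, 12), 1))  # 78.8 bits
print(round(entropy_bits(10, 4), 1))   # 13.3 bits
```

Remembering even one uniformly random 79-bit string is hard; remembering dozens, each unique, is what the human brain “is not meant to” do.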

“Finding ways to authenticate users without using regular text passwords, or finding ways for users to more easily manage their many passwords, would improve end-user security considerably.  There have been some advancements in this area, but all of the alternatives so far still have drawbacks of their own.”

One alternative is graphical passwords, which use clicks on parts of images or grids, in place of text. For children, it may be easier to remember a sequence of images, such as toys, than it is to recall text – particularly if they’re not yet fully literate.
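A hypothetical matcher sketches how a click-point graphical scheme might verify a login. This is loosely modelled on click-point systems generally, not on Chiasson’s implementations; real schemes discretize and hash the click points rather than comparing raw coordinates:

```python
import math

def clicks_match(stored, attempt, tolerance=15):
    # Each graphical password is an ordered list of (x, y) click points;
    # a click counts as correct if it lands within `tolerance` pixels
    # of the corresponding stored point.
    return len(stored) == len(attempt) and all(
        math.dist(s, a) <= tolerance for s, a in zip(stored, attempt)
    )

secret = [(120, 80), (40, 200), (310, 95)]
print(clicks_match(secret, [(118, 84), (45, 196), (305, 100)]))  # True
print(clicks_match(secret, [(10, 10), (45, 196), (305, 100)]))   # False
```

The tolerance is what makes such schemes usable: people can re-find a spot on an image far more reliably than they can re-type a random string.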

“All of these security systems that we design are meant for use by humans at some point — otherwise, what’s the point?” says Chiasson, who also shares her work with the public as the deputy scientific director of the Smart Cybersecurity Network (SERENE RISC), a knowledge mobilization network that publishes research summaries and stages activities like workshops and stakeholder engagement events.

“We’re not going to build a better human, so we should focus on building systems that take into account all the variables that come with humans. A system might be secure in theory, but if users can’t reliably use it in practice, it fails.”

Paul Van Oorschot: Clarifying digital certificate trustworthiness

E-commerce has become a trillion-dollar industry, but websites like Amazon.com couldn’t be successful if they weren’t able to protect financial information like credit card numbers.

They are able to do that because of a ubiquitous – but largely invisible – process called asymmetric cryptography.

It is a type of encryption in which each party holds a pair of mathematically linked keys: a public key that can be shared with anyone, and a private key that is kept secret. Each key is a large, effectively unpredictable number. When a user sends sensitive information to a website, an algorithm processes the website’s public key to encrypt the information, making it indecipherable to anyone who intercepts it.

It’s only possible to decode the indecipherable message using the matching private key, which is held only by the website receiving the information. When the private key is processed by the algorithm, the indecipherable gibberish is transformed back into the original information.

The key pair is generated by the website itself, but a trusted third party called a certificate authority issues a digital certificate vouching that a given public key genuinely belongs to that website, so the user’s browser can authenticate the site before sending anything sensitive.

Users don’t need to understand that any of this is happening to engage in the process — asymmetric cryptography authenticates us every time we log in to our smartphones, email and social media.
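The public/private key mechanics can be seen in miniature with “textbook” RSA, one classic asymmetric scheme. The toy primes below are for illustration only; real deployments use 2048-bit-plus keys, padding such as OAEP, and vetted cryptographic libraries:

```python
# "Textbook" RSA with toy primes -- illustration only, never real security.
p, q = 61, 53              # secret primes
n = p * q                  # 3233: the public modulus, shared with everyone
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, also shared
d = pow(e, -1, phi)        # private exponent, kept secret (Python 3.8+)

message = 42                       # must be smaller than n
ciphertext = pow(message, e, n)    # anyone can encrypt with (e, n)
recovered = pow(ciphertext, d, n)  # only the private-key holder can decrypt
print(recovered == message)        # True
```

An interceptor who sees `ciphertext`, `e` and `n` would have to factor `n` to recover `d`; with toy primes that is trivial, but with sufficiently large primes it is computationally infeasible.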

Ensuring that asymmetric cryptography is secure in the face of evolving cyber threats is an ongoing process, but some of the early Public Key Infrastructure (PKI) products and toolkits that today’s processes evolved from were developed here in Ottawa.

Working with Ottawa cybersecurity firm Entrust in the mid-1990s, Carleton Professor of Computer Science Paul Van Oorschot contributed to the development of the world’s first generic PKI.  Along with work by U.S.-based Verisign and Lotus Notes, this helped lay the foundation for the further development of certificate-based infrastructure like the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) cryptographic protocols that are widely used today.

Still, Van Oorschot, who holds the Canada Research Chair in Authentication and Computer Security, believes there is room for improvement.

“There is too little control over and oversight of this certificate authority-based system,” he says.

“By the current design, browsers essentially end up trusting both trustworthy and fraudulent certificates, with end-users given insufficient information to know the difference. Research currently underway aims to reduce the exposures that result from the inability to distinguish trustworthy from fraudulent certificates.”

Ashraf Matrawy: A different internet for a different age

“The internet wasn’t designed for what we’re using it for,” says Ashraf Matrawy, Associate Professor in Carleton’s School of Information Technology.

“And we’re moving into an era where we’ll be using it in a totally different way.”

Where the internet once enabled individual computers to connect and share information, the adoption of cloud computing and smartphones has given the network enormous centralized computing power that can be harnessed by devices that fit comfortably in the palm of your hand.

The launch of 5G networks that enable even faster data transmission, and the adoption of software-defined systems that replace on-site computer hardware infrastructure with cloud-based software and servers only build on the changes that have already taken place.

“We need to look at how we are managing the entire network in a way that’s different from how we’ve managed it so far,” says Matrawy.

“Software-defined systems enable us to make changes faster and in more dynamic ways.”

Businesses and governments can ask service providers to dedicate bandwidth and servers to operate software-defined systems in the cloud, but these systems share that cloud infrastructure with other organizations, and for organizations with critical safety functions, ensuring the security of data stored in the cloud is of paramount concern.

“We’re looking at ways of protecting network slices against denial of service attacks,” Matrawy says.

“It’s a complex problem because you need to look at security but also need to keep in mind that there are performance metrics that you want to achieve. If I give you a security solution that hurts video performance or is going to be inconvenient for the user, it will not be successful. The way to tackle this problem is to look at it as an optimization problem. We want to optimize security while maintaining an acceptable level of end-to-end delay.”
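A minimal sketch of that optimization framing follows. The defences and numbers are entirely hypothetical (not from Matrawy’s work): choose the strongest mitigation whose added delay still fits the latency budget.

```python
# Hypothetical defences: (name, security score, added end-to-end delay in ms)
options = [
    ("no filtering",     0, 0.0),
    ("rate limiting",    3, 0.5),
    ("deep inspection",  8, 4.0),
    ("full proxying",   10, 12.0),
]

def best_defence(options, delay_budget_ms):
    # Maximize the security score subject to an end-to-end delay constraint.
    feasible = [o for o in options if o[2] <= delay_budget_ms]
    return max(feasible, key=lambda o: o[1])

print(best_defence(options, 5.0)[0])  # deep inspection
```

Tighten the budget and the optimizer is forced down to a weaker defence, which is exactly the security-versus-performance trade-off Matrawy describes.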


