
Background
The application of Artificial Intelligence (AI) models has become ever more prevalent across various domains, and the accuracy of such models has improved significantly in recent years.

Some systems recommend movies and songs to us, others automate fundamental business processes or save lives by detecting tumors. In the near future, machines relying on AI will drive our cars, fly our aircraft, and manage our care. But for that to take place, we need to ensure that those systems are robust against attempts to hack them. In fact, many Machine Learning (ML) systems, including Deep Learning (DL) systems, have proved vulnerable to adversarial attacks, during both training and deployment.

For example, autonomous cars have been shown to fail to recognize a stop sign simply because some stickers were strategically placed on it. Facial recognition systems have been shown to break down when people wear specially designed clothes such as patterned t-shirts. There are also reports of the safety and ADAS systems of ordinary cars suddenly braking at certain locations for no apparent reason. It is, therefore, crucial to address these problems and create robust models that can be trusted in real scenarios.

Objectives
In the real world, AI models can encounter both incidental adversity, such as when data becomes corrupted, and intentional adversity, such as when hackers actively sabotage them. Both can mislead a model into delivering incorrect predictions or results. Our objective is to carry out research on Adversarial Robustness, that is, to make AI models impervious to irregularities and attacks by rooting out weaknesses, anticipating new strategies, and designing robust models that perform as well in the wild as they do in a sandbox.
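To make the intentional kind of adversity concrete: one standard attack from the adversarial ML literature is the Fast Gradient Sign Method (FGSM), which nudges an input in the direction that most increases the model's loss. Below is a minimal sketch, assuming PyTorch; the model and data are toy placeholders, not part of this project's actual methodology.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon):
    """Fast Gradient Sign Method: perturb input x in the direction
    that increases the loss, with each pixel bounded by epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the sign of the gradient; clamp back to a valid input range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy demo: a tiny linear "classifier" on random data.
torch.manual_seed(0)
model = nn.Linear(4, 3)
x = torch.rand(2, 4)
y = torch.tensor([0, 2])
x_adv = fgsm_perturb(model, x, y, epsilon=0.1)
```

Defending against exactly this style of bounded perturbation (and its stronger iterative variants) is what "robust in the wild" means in practice.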

As a PhD student within the area of Machine Learning, you will work in close collaboration with senior researchers and external partners on relevant problems and research questions. This may involve, but is not limited to, the development or improvement of algorithms, methods, or tools to handle specific machine learning vulnerabilities. Your main focus will initially be on understanding these vulnerabilities, on a deep dive into the causal structure of deep networks, and on investigating the use of Generative Adversarial Networks (GANs) to increase robustness to data perturbations.

The goals of the research will be re-evaluated regularly and decided jointly by the supervisors, the PhD student and the research group, in order to explore relevant areas within adversarial machine learning. A successful project is expected to lead to multiple academic publications, presented at prestigious conferences and workshops, that push forward the boundaries of research in adversarial robustness.

The selected candidate will be employed by RISE AB as an industrial PhD student for the duration of the studies. This position is partially funded by the DataLeash project, part of the Digital Futures research center (https://www.digitalfutures.kth.se/research/collaborative-projects/learning-and-sharing-under-privacy-constraints-dataleash/).

Terms

- Recruiting manager: Marco Forzati, PhD
- Main supervisor: Sepideh Pashami, PhD
- Industry supervisor: Fatemeh Rahimian, PhD
- Division, department: Digital Systems division, Computer Science department
- Location: RISE Computer Science, Kista, Stockholm
- Application deadline: April 24th, 2022
- Type of Position: Industrial PhD student
- Compensations: Fully funded PhD project
- Starting date: As soon as possible, not later than August 15th, 2022.

Candidate profile:
In order to meet the general entry requirements, the applicant must have completed a second-cycle degree, and have completed courses equivalent to at least 240 higher education credits, of which 60 credits must be in the second cycle, or have otherwise acquired equivalent knowledge in Sweden or elsewhere. In addition, the applicant is required to have English language skills equivalent to the Common European Framework of Reference for Languages level B2.

We will base our selection on the following components:

- A background in Computer and Systems Sciences or related fields
- Good knowledge of machine learning and deep learning, demonstrated by relevant courses and/or a master's thesis
- Having publication(s) in the area would be an additional asset
- A strong record of computer programming
- Experience in Python, PyTorch and/or TensorFlow is desirable
- Demonstrated ability to work both independently and as a team member

We look forward to your application!
Send in your application as soon as possible, by April 24th, 2022 at the latest. Applications will be reviewed on a rolling basis.

Applications should include:

- A cover letter, explaining why you think you are a good fit for this position.
- Your CV with your education, professional experience, and specific skills.
- Academic transcript.
- A written report you authored or co-authored for a university level course.

 

This is a job advertisement titled "PhD position - Increasing Robustness in the Presence of Adversarial Attacks" from the company RISE Research Institutes of Sweden AB, published on webbjobb.io on 4 April 2022 at 09:53.
