PhD - Human-chatbot interaction model based on a trust inference

Orange Business Services

about the role

Your thesis will focus on the following topic: "Human-chatbot interaction model based on a trust inference".

In 2023, large language models (LLMs) broke through to the general public. However, a growing part of the artificial intelligence (AI) research community considers that further technical progress can only be achieved through a paradigm shift. The major research question for the coming years is how machines can learn as effectively as humans and animals.

During interaction with a conversational bot, the human counterpart constantly predicts the bot's behaviour. They therefore need to observe that the bot's behaviour does not deviate too far from their prediction; otherwise, they risk disengaging from the relationship.

To gain the trust of the human actor, the bot's behaviour must match what the human expects: quite simply, it must behave as a human would (i.e. with benevolence, sincerity, honesty, etc.).

This thesis provides an opportunity to refine the concept of trust and its modelling, in order to contribute to our responsibility as a telecommunications operator towards our customers, partners, investors and society at large.

Our previous work on the mathematical modelling of trust demonstrated the theoretical value of the approach. However, the model suffers from a number of limitations, and several issues remain unresolved:

  • How can we measure the reliability of the model in relation to human perceptions?
  • How can we introduce a 'proprioceptive' capability into the model, i.e. enable the bot to act on its environment (the human user) by optimising the level of trust perceived by the latter (and vice versa)?

To address these issues, we propose to explore the use of active inference, which suggests that, in the context of human-robot collaboration (HRC), trust can be considered in terms of virtual control over an artificial agent. Interactive feedback is a necessary condition for extending the perception-action cycle of the trusted agent.
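As an illustration only (and not the thesis model itself), trust in such a perception-action loop can be sketched as a belief the human holds about the bot's reliability, updated after every interaction. The class and parameter names below are hypothetical:

```python
# Illustrative sketch only: trust as a Bayesian (beta-binomial) belief over
# a bot's reliability, updated from each interaction. All names here are
# hypothetical and do not describe the thesis model.

class TrustBelief:
    """Trust is the expected probability that the bot's next behaviour
    matches the human's prediction, given past interactions."""

    def __init__(self, prior_match: float = 1.0, prior_mismatch: float = 1.0):
        self.alpha = prior_match      # pseudo-count of confirmed predictions
        self.beta = prior_mismatch    # pseudo-count of violated predictions

    def observe(self, behaviour_matched_prediction: bool) -> None:
        # Each interaction is evidence about the bot's reliability.
        if behaviour_matched_prediction:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def trust(self) -> float:
        # Posterior mean of the reliability parameter.
        return self.alpha / (self.alpha + self.beta)


belief = TrustBelief()
for matched in [True, True, False, True]:
    belief.observe(matched)
print(round(belief.trust, 2))  # 0.67
```

In an active-inference framing, the bot would additionally select actions that it predicts will keep this belief (as perceived by the human) close to a desired level, which is the 'proprioceptive' capability mentioned above.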

The main expected achievements are:

  • a trust model that best explains the behaviour of the human agent in a reliable sensory exchange with an LLM-driven bot;
  • experiments carried out with several commercial products, involving cohorts of human users, to establish the reliability of the trust model's predictions and to publish the results in the appropriate scientific communities, for example in the field of computational neuroscience;
  • finally, a prototype of a new type of automatic agent, built by applying the model to the bot itself, which by design would exhibit behaviour acceptable to humans.

about you

The scientific field of the PhD is in the following disciplines:

  • Applied mathematics, calculation and simulation: Learning and statistical methods: Mathematical statistics: estimation and inference models (131); Learning theory (132).
  • Computer science, algorithms, programming, software and architectures: Theoretical computer science, algorithms and performance: Artificial intelligence, multi-agent systems (212).

We are looking for a candidate with an engineering degree and/or a research Master's degree (Master 2) specialising in the above-mentioned disciplines.

The candidate will have demonstrated, through their studies and initial professional experience, a high capacity for abstraction, a strong interest in applied mathematics and computer science, a strong work ethic and any other qualities suited to a doctoral project.

One or more internships in the fields of machine learning, artificial intelligence, probability and/or applied mathematics are a prerequisite for applying for the thesis.

Curiosity about the linked scientific disciplines (including, but not limited to, sociology, neuroscience, cognitive science and economics) will also be a determining factor in the selection of the candidate.

additional information

The tremendous progress made in machine learning over the last two decades (deep learning, reinforcement learning) has been made possible by computing power that computer scientists have never had before. However, it is now recognised that intelligent agents should learn as much as possible by observation, in order to minimise the number of costly and risky trials required to learn a given task.

In the context of this thesis, the PhD student will have the opportunity to explore a breakthrough approach: drawing on the most recent knowledge in the behavioural sciences and neurosciences to build new models of multi-agent communication, thereby contributing to the current paradigm shift in machine learning. As such, the thesis work will be part of a broader scientific approach illustrated, for example, by the position paper "A Path Towards Autonomous Machine Intelligence" published by Yann LeCun in 2022.

The PhD student will have the opportunity to work with innovation teams addressing operational customer-relationship issues within Orange. This will make it easier to collect data and set up experiments to assess the mathematical models that will be developed.

department

Orange Innovation brings together the research and innovation activities and expertise of the Group's entities and countries. We work every day to ensure that Orange is recognized as an innovative operator by its customers and we create value for the Group and the Brand in each of our projects. With 720 researchers, thousands of marketers, developers, designers and data analysts, it is the expertise of our 6,000 employees that fuels this ambition every day.

Orange Innovation anticipates technological breakthroughs and supports the Group's countries and entities in making the best technological choices to meet the needs of our consumer and business customers.

Within the Innovation division, you will join the Customer Relations and Business Information Systems division and report to its Strategy department, which leads a research project on digital trust. You will work within a research ecosystem, alongside engineers who help put the concepts studied into practice.

contract

Thesis
