Hans van Ditmarsch has been appointed Professor of Artificial Intelligence at the Faculty of Science of the Open University (OU). He builds on the ideas of Virginia Dignum: AI should be transparent, so that, for example, it can be checked for bias. A conversation.
Van Ditmarsch (1959) served on a course committee at the Open University (OU) from 1989 to 1994, after which he moved to the University of Groningen; from Groningen there remained regular contact with the OU. In 1995 he completed a pilot phase in business administration. From 1996 to 2000 he worked with the OU at the University of Groningen on the BOK programme (Broad Educational Innovation in Knowledge Technology), and in 2009 he co-authored the OU course Logica in actie. He received his PhD from the University of Groningen in 2000 and later worked as an assistant professor at the University of Otago (New Zealand) and the University of Aberdeen (Scotland), and as a senior researcher at the Universidad de Sevilla (Spain).
Since 2012 he has been a senior researcher at the Centre National de la Recherche Scientifique (CNRS), the French counterpart of NWO, stationed at the LORIA research institute in Nancy, where he collaborates with the Université de Lorraine. This French position ends on 1 June.
Within the AI group at the OU, Van Ditmarsch points out, it is Martijn van Otterlo who teaches the ethical aspects of AI. A Responsible AI course will be introduced in the OU’s Master AI, based on Virginia Dignum’s book of the same name; Dignum is affiliated with Umeå University in Sweden and with TU Delft. ‘I know Virginia as an AI colleague,’ says Van Ditmarsch.
Dignum has explained to the European Commission why ‘responsible AI’ (the responsible use of AI) is needed and how it can be achieved. In her contribution she describes the risks of AI: discrimination and bias, devaluation of human skills, lack of control, loss of self-determination and loss of human responsibility. Against these stand benefits in the areas of health care, climate change mitigation, communication, education and employment.
Is ‘responsible AI’ possible? ‘Responsible behaviour is always possible,’ Van Ditmarsch responds. ‘You learn it from your parents, family, friends and at school. At university too: in a course such as Responsible AI, students are given guidance in using and developing technology responsibly. I cannot imagine responsible AI meaning anything else: teaching students to cite published work carefully, not to submit the same paper to several conferences, to support fellow students and other researchers, and so on. That is simply how you behave in an educational environment, and as someone who contributes to the community for the long term.’
The question is whether responsible AI requires fast computers or just ordinary office PCs. Van Ditmarsch replies: ‘I do not think responsible AI requires fast computers, but it depends on the application. I remember a discussion about banning facial recognition except for tracking criminals or terrorists. That is easier said than done. Facial recognition for limited target groups does not require a supercomputer, but such an extra protocol layer seems like a data-science problem to me; it may well require faster computers. On the other hand, adapting the instructions and giving the user extra decision moments can be done on any computer.’
Another new member of the AI team at the OU works not so much on facial recognition as on so-called eye tracking (and its application to cognitive problems): Frouke Hermens, who will also contribute to the new Master AI.
“I used my student Jinsheng’s pass”
The brand-new professor can speak from personal experience about facial recognition. ‘It has been clear to me for some time that a lot is being invested in AI in China. I used to go there regularly.’ Most recently he was in Guangzhou at Sun Yat-sen University, as a guest of Minghui Ma and Yongmei Liu. To access the university campus, a scanned student or staff card, or facial recognition, was sufficient to open the entrance gates.
‘I used the pass of my student Jinsheng (who was in Amsterdam at the time). I am not in their database, so facial recognition did not work for me, but I had that pass, so I got in. What I do not know is whether facial recognition could have determined that the pass was not mine; in any case, it never raised an eyebrow among the security guards at the campus entrance.’
What applications would Van Ditmarsch like to see built with AI? Communication protocols. ‘Gossip is common in this domain, in so-called gossip protocols, and especially in extensions of such protocols with knowledge of the agents (besides extensions with network information, on which I have done a lot of recent work). It is about spreading information through networks. One problem for potential applications is computational complexity. Not the kind you get past with a supercomputer, but only with better algorithms.’
A gossip protocol is a procedure for communication between computers, modelled on the way epidemics spread. Some distributed systems use such a protocol to ensure that data reaches all members of a group.
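The idea can be sketched in a few lines. Below is a minimal, hypothetical simulation of the classic gossip problem, not one of Van Ditmarsch’s own protocols: each of n agents starts knowing only its own secret, and every call between two agents exchanges everything both know. The function name and parameters are illustrative.

```python
import random

def gossip(n, seed=0):
    """Simulate a 'call anyone' gossip protocol: repeatedly pick two
    random agents, merge their sets of known secrets, and count calls
    until every agent knows every secret."""
    rng = random.Random(seed)
    knows = [{i} for i in range(n)]          # agent i starts knowing only secret i
    calls = 0
    while any(len(k) < n for k in knows):    # someone is still missing a secret
        a, b = rng.sample(range(n), 2)       # pick two distinct agents
        merged = knows[a] | knows[b]         # a call exchanges all known secrets
        knows[a] = set(merged)
        knows[b] = set(merged)
        calls += 1
    return calls

print(gossip(6))
```

A classic result says that for n ≥ 4 agents an optimally scheduled sequence needs only 2n − 4 calls; random calling, as above, typically needs more. This gap illustrates the point in the interview: progress here comes from better algorithms and protocols, not from faster hardware.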
Plenty of interest
According to Van Ditmarsch, there is plenty of interest in AI among students. ‘In 2018 I was a guest lecturer at the Indian Institute of Technology Mandi in India. Located in the Himalayas, it is very secluded and you can walk there wonderfully. Very motivated students: I was asked to judge third-year projects, ranging from electric skateboards to automated medicine boxes, and everyone wanted to do something with deep learning; whether that was always needed, I don’t know. Nothing beats a souped-up skateboard. It is a joy and a privilege. It remains to be seen how much interest there will be at the OU. In any case, I start on 1 June.’