A team led by City University London’s Mixed Reality Lab, together with academics from other universities, is a finalist in the HackingBullipedia Global Challenge, which aims to discover the most inventive design and technology to support the world’s largest repository of gastronomic knowledge.
A combined team comprising academics from City University London’s Mixed Reality Lab, the University of Aix-Marseille (France) and Sogang University (South Korea) has reached the final of this year’s HackingBullipedia Global Challenge, aimed at discovering the most inventive design and technology to support the world’s largest repository of gastronomic knowledge.
Led by Professor Adrian Cheok, Professor of Pervasive Computing in the School of Informatics, the team’s competition entry is titled “Digital Olfaction and Gustation: A Novel Input and Output Method for Bullipedia”.
The team proposes novel methods of digital olfaction and gustation as input and output for internet interaction, specifically for creating and experiencing the digital representation of food, cooking and recipes on the Bullipedia. Other team members include Jordan Tewell, Olivier Oullier and Yongsoon Choi.
No stranger to digital olfaction applications in the culinary space, Professor Cheok recently gave a Digital Taste and Smell presentation to Chef Andoni Luis Aduriz, the world’s third-ranked chef, at Mugaritz restaurant in San Sebastian, Spain.
The HackingBullipedia Global Challenge was created by the world-renowned culinary expert Chef Ferran Adrià i Acosta.
The jury, comprising some of the best culinary and digital technology experts in the world, arrived at a shortlist of four teams after carefully sifting through 30 proposals from three continents, drawn from a mix of independent and university teams.
The other teams in the final are from Universitat Pompeu Fabra (Barcelona); the Technical University of Catalonia; and an independent (non-university) team from Madrid.
On the 27th of November, two representatives from each of the four finalist teams will pitch their proposals and give demonstrations to the competition’s judges, after which the winner will be decided.
Professor Cheok is very pleased that City will be in the competition final:
“I am quite delighted that we were able to make the final of this very challenging and prestigious competition. There were entries from various parts of the world covering a broad spectrum of expertise including a multidisciplinary field of scientists, chefs, designers, culinary professionals, data visualisation experts and artists. We are confident that our team has prepared an equally challenging and creative proposal which will be a game-changer in the gastronomic arena.”
Adrian Cheok, professor of pervasive computing at City University London and director of the Mixed Reality Lab at the National University of Singapore, is on a mission to transform cyberspace into a multi-sensory world. He wants to tear through the audiovisual paradigm of the internet by developing devices able to transmit smells, tastes, and tactile sensations over the web.
Lying on the desk in Cheok’s lab is one of his inventions: a device that connects to a smartphone and shoots out a given person’s scent when they send you a message or post on your Facebook wall. Then there’s a plexiglass cubic box you can stick your tongue in to taste internet-delivered flavours. Finally, a small plastic and silicone gadget with a pressure sensor and a moveable peg in the middle. It’s a long-distance-kissing machine: You make out with it, and your tongue and lip movements travel over the internet to your partner’s identical device—and vice versa.
“It’s still a prototype but we’ll be able to tweak it and make it transmit a person’s odour, and create the feeling of human body temperature coming from it,” Cheok says, grinning as he points at the twin make-out machines. Just about the only thing Cheok’s device can’t do is ooze digital saliva.
I caught up with Cheok to find out more about his work toward a “multi-sensory internet.”
Motherboard: Can you tell us a bit more about what you’re doing here, and what this multi-sensory internet is all about?
There is a problem with the current internet technology. The problem is that, online, everything is audiovisual and behind a screen. Even when you interact with your touchscreen, you’re still touching a piece of glass. It’s like being behind a window all the time. Also, on the internet you can’t use all your senses—touch, smell and taste—like you do in the physical world.
Here we are working on new technologies that will allow people to use all their senses while communicating through the Internet. You’ve already seen the kissing machine, and the device that sends smell-messages to your smartphone. We’ve also created devices to hug people via the web: You squeeze a doll and somebody wearing a particular bodysuit feels your hug on their body.
What about tastes and smells? How complex are the scents you can convey through your devices?
We’re still at an early stage, so right now each device can just spray one simple aroma contained in a cartridge. But our long-term goal is to act directly on the brain to produce more elaborate perceptions.
What do you mean?
We want to transmit smells without using any chemical, so what we’re going to do is use magnetic coils to stimulate the olfactory bulb [part of the brain associated with smell]. At first, our plan was to insert them through the skull, but unfortunately the olfactory part of the brain is at the bottom, and doing deep-brain stimulation is very difficult.
And having that stuff going on in your brain is quite dangerous, I suppose.
Not much—magnetic fields are very safe. Anyway, our present idea is to place the coils at the back of your mouth. There is a bone there called the palatine bone, which is very close to the region of your brain that makes you perceive smells and tastes. In that way we’ll be able to make you feel them just by means of magnetic actuation.
But why should we send smells and tastes to each other in the first place?
For example, somebody may want to send you a sweet or a bitter message to tell you how they’re feeling. Smell and taste are strongly linked with emotions and memories, so a certain smell can affect your mood; that’s a totally new way of communicating. Another use is commercial. We are working with the fourth best restaurant in the world, in Spain, to make a device people can use to smell the menu through their phones.
Can you do the same thing also when it comes to tactile sensations? I mean, can you put something in my brain to make me feel hugged?
It is possible, and there are scientists in Japan who are trying to do that. But the problem with that is that, for the brain, the boundary between touch and pain is very thin. So, if you perform such stimulation you may very easily trigger pain.
It looks like you’re particularly interested in cuddling distant people. When I used to live in Rome, I once had a relationship with a girl living in Turin and it sucked because, well, you can’t make out online. Did you start your research because of a similar episode?
Well, I have always been away from my loved ones. I was born in Australia, but I moved to Japan when I was very young, and I have relatives living in Greece and Malaysia. So maybe my motivation has been my desire to feel closer to my family, rather than to a girl. But of course I know that the internet has globalized our personal networks, so more and more people have long-distance relationships. And, even if we have internet communications, the issue of physical presence is very relevant for distant lovers. That’s why we need to change the internet itself.
So far you have worked on a long-distance-hugging device and a long-distance-kissing machine. You also have gadgets that can transmit a person’s body odour. If I connect the dots, the next step will be a device for long-distance sex.
Actually, I am currently doing some research about that. You see, the internet has produced a lot of lonely people, who only interact with each other online. Therefore, we need to create technologies that bring people physically—and sexually—together again. Then, there’s another aspect of the issue…
As you noticed, if you put all my devices together, what you’re going to have soon are sorts of “multi-sensory robots”. And I think that, within our lifetime, humans will be able to fall in love with robots and, yeah, even have sex with them.
It seems to me all the work you’re doing here may be very attractive for the internet pornography business.
Of course, one of the big industries that could be interested in our prototypes is the internet sex industry. And, frankly speaking, that being a way of bringing happiness, I think there’s nothing wrong with that. Sex is part of people’s lives. In addition, very often the sex industry has helped to spur technology.
But so far I haven’t been contacted by anybody from that sector. Apparently, there’s quite a big gap between people working in porn and academia.
You can touch the screen of your PC or mobile phone and interact with that inanimate object, but can you smell it? And if you can smell it, how about tasting it? It may sound fanciful, but Professor Adrian Cheok believes it is not far-off and fanciful but near and achievable. He has been working on a device that will allow users to smell the person they are talking to on the phone. He joins Click to demonstrate ChatPerf and the ability to smell and taste our technology.
Researchers believe we will become emotionally attached to robots, even falling in love with them. People already love inanimate objects like cars and smartphones. Is it too far a step to think they will fall deeper for something that interacts back?
“Fantastic!” says Adrian Cheok, of the Mixed Reality Lab at Japan’s Keio University, when told of the Paro study. Professor Cheok, from Adelaide, is at the forefront of the emerging academic field of Lovotics, or love and robotics.
Cheok believes the increasing complexity of robots means they will have to understand emotion. With social robots that may be with you 24 hours a day, he says it is “very natural” people will want to feel affection for the machine. A care-giver robot will need to understand emotion to do its job, and he says it would be a simple step for the robot to express emotion. “Within a matter of years we’re going to have robots which will effectively be able to detect emotion and display it, and also learn from their environment,” he says.
The rather spooky breakthrough came when artificial intelligence researchers realised they did not need to create artificial life. All they needed to do was mimic life, which makes mirror neurons – the basis of empathy – fire in the brain. “If you have a robot cat or robot human and it looks happy or sad, mirror neurons will be triggered at the subconscious level, and at that level we don’t know if the object is alive or not, we can still feel empathy,” Cheok says. “We can’t really tell the difference if the robot is really feeling the emotion or not and ultimately it doesn’t matter. Even for humans we don’t know whether a person’s happy or sad.” He argues if a robot emulates life, for all intents and purposes it is alive.
Psychologist Amanda Gordon, an adjunct associate professor at the University of Canberra, is sceptical. “It’s not emotional, it’s evoking the emotion in the receiver,” she says. “That seal isn’t feeling anything. It’s not happy or sad or pleased to see you.”
She says the risk is that people fall for computer programs instead of a real relationship. “Then you’re limiting yourself. You’re not really interacting with another. Real-life relationships are growth-ful, you develop in response to them. They challenge you to do things differently.”
Cheok’s research shows 60 per cent of people could love a robot. “I think people fundamentally have a desire, a need to be loved, or at least cared for,” he says. “I think it’s so strong that we can probably suspend belief to have a loving relationship with a robot.”
Probably the most advanced android in the world is the Geminoid robot clone of its creator Hiroshi Ishiguro, director of the Intelligent Robotics lab at Osaka University. Professor Ishiguro says our bodies are always moving, so he programmed that realistic motion into his creation along with natural facial expressions.
The one thing it does not do is age, which means 49-year-old Ishiguro is constantly confronted with his 41-year-old face. “I’m getting old and the android doesn’t,” he says. “People are always watching the android and that means the android has my identity.” So he has had plastic surgery – at $10,000, he says, it is cheaper than the $30,000 it would cost to build a new head.
Robots can help kids with autism who do not relate to humans. Ishiguro is working with the Danish government to see how his Telenoid robots can aid the elderly.
Moyle says she has had inquiries from throughout Australia about Paro. A New Zealand study showed that people with dementia interacted more with a Paro than with a living dog.
“There are a lot of possible negative things [that artificial intelligence and robots could lead to],” Cheok says, “and we should be wary as we move along. We have to make sure we try to adjust. But in general I think the virtual love for the characters in your phone or screen or soon robots is ultimately increasing human happiness, and that’s a good thing for humanity.”
This week I had a chance to visit Dr. Adrian Cheok and his students at the Mixed Reality Lab at Keio University. The research they’re conducting is based on the notion that in the future technology will shift from today’s ‘Information Age’ to an ‘Experience Age’. Dr. Cheok predicts that we will experience the realities of other people, as opposed to just reading about them, listening to them, or watching a video on a glass screen.
Visiting the Mixed Reality Lab was a refreshing experience. I’ve come to associate terms like ‘Augmented Reality’ with things like Sekai Camera, or the fascinating human Pac-Man game that his lab created a few years back. But Dr. Cheok points out quite rightly – and perhaps surprisingly – that one of the earliest examples of AR was Sony’s Walkman, the first device that allowed people to have their own personal sounds with them all the time.
Beyond Sound and Vision
Once we accept the idea that augmented/mixed-reality is not just limited to vision, then it opens up a whole world of possibilities. And these are the possibilities that Dr. Cheok and his students are researching. He explains:
I became interested to see if we could extend augmented reality to other senses. To touch. At first I made a system for human-to-pet communication. We made a jacket for a chicken that allowed a person to convey touch to a chicken remotely. Then we made Huggy Pajama, which could be used to hug a child remotely.
While projects like this might strike us as a little strange — or even wacky — it’s important to note that they can be far more practical than you might think at first glance. A version of Huggy Pajama called T Jacket has subsequently been developed for therapeutic purposes, so that, for example, a child with autism can be comforted remotely with hugs sent over the internet from a smartphone.
Readers may recall that we previously featured another remarkable haptic communication project from the Mixed Reality Lab called Ring-u. The idea here is that vibrating messages can be sent over the internet, back and forth between a pair of rings, and there is now a smartphone interface for the ring as well. This project has perhaps far greater potential in the consumer electronics space, and the team is speaking with toy companies and high-end jewelers about possible future developments.
Taste the Future
But perhaps the biggest challenge for Dr. Cheok and his team is figuring out how to digitize the two remaining senses:
Smell and taste are the least explored areas because they usually require chemicals. [But] we think they are important because they can directly affect emotion, mood, and memory, even in a subconscious way. But currently it’s difficult because things are still analog. This is like it was for music before the CD came along.
Amazingly, the team has developed a prototype electric taste machine, and I was lucky enough to try it out during my visit. The device in its current form is a small box with two protruding metal strips, between which you insert your tongue to experience a variety of tastes. It works by using electric current and temperature to communicate taste. For me, some tastes were stronger than others, with lemon and spicy being the strongest; I experienced what felt like only a fraction of the intended tastes, but it was very impressive nonetheless. I’m told that in the future this system could even take a lollipop-like form, which would certainly be very interesting.
Electric taste machine
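To make the description above more concrete, here is a minimal Python sketch of how a controller for such a device might map a named taste to stimulation parameters. Everything here is an assumption made for illustration: the parameter values, the taste table, and the command format are invented, not taken from the lab’s actual (unpublished) design.

```python
# Illustrative sketch only: maps taste names to hypothetical stimulation
# parameters for an electric-taste device. The values and the command
# format are invented for illustration, not the lab's actual design.

from dataclasses import dataclass


@dataclass(frozen=True)
class Stimulus:
    current_ua: int       # electrode current, microamps (hypothetical range)
    frequency_hz: int     # pulse frequency
    temperature_c: float  # metal-strip temperature


# Hypothetical calibration table: each taste is approximated by a
# combination of current, frequency and temperature.
TASTE_TABLE = {
    "lemon":  Stimulus(current_ua=180, frequency_hz=50, temperature_c=25.0),
    "salty":  Stimulus(current_ua=120, frequency_hz=100, temperature_c=25.0),
    "bitter": Stimulus(current_ua=90, frequency_hz=30, temperature_c=22.0),
    "spicy":  Stimulus(current_ua=60, frequency_hz=20, temperature_c=38.0),
}


def encode_command(taste: str) -> bytes:
    """Encode a taste request in a fixed text format that a
    microcontroller could parse (the format itself is made up)."""
    s = TASTE_TABLE[taste]
    return f"STIM {s.current_ua} {s.frequency_hz} {s.temperature_c:.1f}\n".encode()


if __name__ == "__main__":
    for taste in TASTE_TABLE:
        print(taste, encode_command(taste))
```

The key design idea this sketch captures is that taste becomes addressable data: once a sensation is reduced to a small set of numeric parameters, it can be stored, sent and replayed like any other digital content.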
The lab is also collaborating with Japanese startup ChatPerf, which you may recognize as the company that developed a smell-producing attachment for smartphones. They will also conduct a formal academic study to see to what extent smell can affect communication between individuals. But even with ChatPerf, the creation of smells is still analog, using cartridges of liquid to emit odors. Later on, Dr. Cheok hopes to simulate smells in a non-chemical, digital way, noting that it can be done via magnetic stimulation of the olfactory bulb.
So while experiments like these sometimes draw laughs and raised eyebrows, the work is quite important in expanding how we see technology’s role in our lives.
These are just a few of the great projects that the Mixed Reality Lab is working on, and we hope to tell you about others in the future.
Although we are now in the age of the Internet, our schools are still stuck in the industrial age. As a result, the gap between our schools and reality is widening and could end in total disruption.
There is a clear link between our schools and the factories of the industrial age. In the production line system developed in the 19th and 20th centuries, each individual had to work at the pace of the industrial process, completing repetitive tasks, and was often banned from speaking.
The current school system is eerily similar. Students move along a linear progression of years, semesters and subjects. Every student studies at the same pace, receives grades and takes exams at the same time. If you excel at maths, you are likely to get bored. If you are bad at maths, you are likely to receive bad grades. No matter, everyone must move straight along the production line and repeat the same task over and over again to pass the exam. In class, you are not allowed to talk but must sit passively and let the teacher transfer information at a set speed.
It is not surprising that schools are modelled on the production line. Society, government and businesses needed manpower for the factories and companies of the industrial age. They set up systems that moulded workers into such manpower.
This model is archaic and unsuited for the Internet age, the age of knowledge. Firstly, we do not need factory workers – we need entrepreneurs, inventors, creative business people and designers. It is difficult to compete in global manufacturing. We can compete only in high value-added sectors such as new products, new services and creative industries.
Secondly, the Internet age allows us to discard the linear model. We have the tools and the ability to learn at our own pace. In fact, we can revive some educational practices of the pre-industrial age, such as the apprentice system. Each person keeps working on something until he or she masters it. A maths exam need not be set for the whole class on a specific day. Instead, students can be given continuous online mini tests. When they have mastered one topic, they move on to the next at their own pace.
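As a minimal Python sketch of the mastery loop just described (the topics, the 80 per cent threshold and the five-attempt window are placeholder assumptions, not a real curriculum design):

```python
# Minimal sketch of mastery-based progression: a learner repeats short
# mini-tests on one topic and advances only after reaching a mastery
# threshold. The topics, threshold and window are placeholder choices.

import random
from typing import Callable, List

MASTERY_THRESHOLD = 0.8  # fraction of recent mini-tests passed
WINDOW = 5               # how many recent attempts to consider


def study(topics: List[str], take_test: Callable[[str], bool]) -> None:
    for topic in topics:
        results: List[bool] = []
        while True:
            results.append(take_test(topic))  # one online mini-test
            recent = results[-WINDOW:]
            if len(recent) == WINDOW and sum(recent) / WINDOW >= MASTERY_THRESHOLD:
                break  # mastered: move on at the learner's own pace
        print(f"Mastered {topic} after {len(results)} attempts")


if __name__ == "__main__":
    random.seed(0)
    attempts: dict = {}

    def simulated_test(topic: str) -> bool:
        # Stand-in learner whose pass probability improves with practice.
        attempts[topic] = attempts.get(topic, 0) + 1
        return random.random() < min(0.95, 0.3 + 0.1 * attempts[topic])

    study(["fractions", "ratios", "algebra"], simulated_test)
```

The point of the loop is that progress is gated by the learner’s own results rather than by the calendar, which is exactly the break from the production-line model.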
The main obstacles to implementing such a new model are the inertia and conservatism of the education sector. However, just like every other industry, education is being disrupted and revolutionized by the Internet. Classes and lectures will go online. Students can view them at their own pace and be evaluated interactively.
Students will be much happier because they can study independently and test their limits (this is how video games work, and games are a good model for learning). Homework, on the other hand, will be done in classrooms and lecture halls. Being physically together will be all about solving problems, doing projects, learning through practical tasks, and working in teams with other students and teachers.
Learning and knowledge production will be done simultaneously. This is much more suited to the great technological and social changes of the 21st century. We need to learn more about tacit knowledge rather than explicit knowledge. Explicit knowledge becomes rapidly out of date when technology is changing so quickly. Tacit knowledge helps us to deal with such change. So does learning by doing and working in teams.
KOLLABORATE.IO 93% of all human communication is visual, but most online collaboration solutions are text-based. Until now. Kollaborate introduces real-time visual collaboration without the hassle.
PRESENTATION.IO Present in real time to anyone on any device. No downloads, no installations: you simply move through your slides, which change simultaneously on all connected devices.
REAKTIFY A real-time feedback analytics tool. Google Analytics tells you what happened on your site, Kissmetrics tells you who did it, Reaktify tells you why.
Assemblage was founded on one simple quest: to make it easy for people and companies to collaborate online with multiple people at the same time. Since that first spark of an idea in 2011, Assemblage products have gone on to help companies and people in over 140 countries work together in real time on the web.
Adrian Cheok upon appointment as Advisor said: “My interest is in the future of internet where we will have multisensory communication with all the five senses. Assemblage is helping to increase experience communication.”
1) What kinds of technologies realise the ‘reality-virtuality coexistence’ in our daily life?
– Adrian: A process of hyperconnectivity, afforded by such technologies as cloud computing and social media, is merging physical reality and digital data.
– Howard: Fundamentally we are talking about video, mobility and cloud. This presumes affordable broadband services with infinite bandwidth.
– Genevieve: It’s more about the experiences supported/enabled by various technologies (e.g. mobile phones, social networks) than the technologies themselves. In fact, experiencing virtual worlds is not strictly about technology – take religious rituals, for example.
2) Where is this zeitgeist heading and how will it shape our future?
– Adrian: In a direction that encompasses more of our senses and feelings. Our social networks may extend beyond humans to emotional/non-verbal communication between humans, their environments, devices and objects.
– Howard: Mixed reality technologies will be applied more extensively to such areas as – but not limited to – virtual training and immersive teaching/learning. These virtual reality-supported learning experiences will increase competence, success and well-being in many of our activities.
– Genevieve: The ways of ‘social networking’ will become more diversified, and new modes of digitally enhanced social engagement will continue to emerge. Cultural, social and regulatory frameworks will play an important role in this process.
3) How can we make AR/MR more humanised and sustainable?
– Adrian: The use of visual, auditory, haptic, olfactory and gustatory senses will enable a new paradigm of more humanised telecommunication. This field has a long way to go, but it would be especially interesting to see how children will grasp these technologies to create value.
– Howard: Are AR/MR technologies about creating alternative realities or enhancing the ‘real world’? It should be about extending and enhancing our physical world. When used for learning and training, AR/MR can prove to be powerful tools for creativity, innovation, collaboration and productivity.
– Genevieve: We are moving from command-control interactions with technology to possibilities of forming ‘relationships’ with them. Siri for example promises to ‘listen’ and gives us a sense of being taken care of. We might imagine a relationship in which humans and technologies are effectively bound to each other.
Adrian David Cheok, an Australian who is now a professor at Keio University in Tokyo, is one of the journal’s founders. The way he sees it, the internet has already helped bring people closer together. But it’s an experience limited by the fact that the internet currently only interacts with two of our senses: sight and sound. Anyone who has been brought back to childhood by a smell, or been comforted by a hug or touch — in other words, pretty much everyone — knows how powerful such senses can be.
“Actually, physically it’s also been shown that the smell and taste senses are directly connected to the limbic system of our brain. The limbic system is responsible for emotion and memory. Unlike the visual sense, which basically gets processed by the visual cortex and then the frontal lobe – the higher-order, logical part – we have a direct connection between smell and taste and the emotional and memory parts of our brain,” Cheok says.
“So much of our lives now is online, but still I think a lot of us will agree it’s so different than meeting someone face-to-face. You have all these different physical communications that we can’t capture now through an audio/visual screen,” he says. “Essentially I’m really interested in [whether we can] merge all of our five senses of human communication with the internet — with the virtual world. That’s what I call ‘mixed reality’.”
Robotics plays a key role in making that a reality, through what is known as telepresence. Basically, it means transmitting actions into a robotic surrogate somewhere else. This can be fairly simple, Cheok says. Cheok and his students have already developed a ring worn on the finger that can deliver a gentle squeeze from a loved one, via a smartphone app. A student of Cheok’s has recently commercially released a vest that can transmit hugs, which is proving useful for calming autistic children. Cheok’s engineers are working on systems to transmit taste, via electrical impulses to the tongue, as well as smell, either via electrical stimulation or the release of chemicals.
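As a toy illustration of the telepresence loop described here (not the lab’s actual implementation), the following Python sketch transmits a squeeze intensity from one device to another over UDP; the port number and one-byte message format are invented for the example.

```python
# Toy illustration of haptic telepresence: one side sends a squeeze
# intensity, the other receives it and would drive an actuator.
# The port and one-byte message format are invented for this example.

import socket
import threading
import time

PORT = 9999  # arbitrary port chosen for the example


def send_squeeze(intensity: float, host: str = "127.0.0.1") -> None:
    """Transmit a squeeze as a single byte in [0, 255]."""
    level = max(0, min(255, int(intensity * 255)))
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(bytes([level]), (host, PORT))


def receive_squeeze() -> float:
    """Block until one squeeze arrives; return intensity in [0, 1]."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", PORT))
        data, _ = sock.recvfrom(1)
        return data[0] / 255  # a real ring would drive a motor here


if __name__ == "__main__":
    t = threading.Thread(target=lambda: print("felt:", receive_squeeze()))
    t.start()
    time.sleep(0.2)  # give the receiver time to bind before sending
    send_squeeze(0.6)
    t.join()
```

The hard part of such systems is not the transport, which is ordinary networking, but the actuation and sensing at each end; this sketch shows only how thin the digital link in the middle can be.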
The goal further down the track will be the creation of robotic avatars — representations or embodiments of people, though not necessarily made to look like them. To start with, these will be soft, fluffy and not particularly complex. For example, we could transmit our presence into a pillow or teddy bear. But as the endeavours of such scientists as Hiroshi Ishiguro progress, the creation of human-like surrogates will become possible.
“We’re definitely getting there… The rate of change of technology is exponential. What before maybe we thought would take 50 years now takes 5 or 10. I don’t think it’s going to be very far off when we have humanoid robots. They may be expensive at first,” Cheok says.
“I think at that stage, we can have virtual avatars; virtual robots which then, for example, [let you] be in Tokyo or Sydney and give a conference in Los Angeles. You don’t have to fly there. Your robot can be there.”
If there’s one major obstacle in the way of Japan’s projected robo-utopia, it’s the country’s economic situation. Japan has been in a state of economic malaise for more than two decades, and memories of the robot-supported boom years are fading. Neither the companies likely to do the research nor the Japanese government are as flush with cash as they used to be.
One of Japan’s major strengths — its peacenik constitution — has also proved to be a weakness. In the United States, the massive military-industrial complex has marshalled resources to create some truly impressive machinery; drones, for example, have been developed to meet guaranteed demand from government agencies. In Japan, however, there is little co-ordination between different institutions and industries, explains Nishida of Kyoto University.
“People are just interested in working on small parts of the problem, rather than looking at the whole,” Nishida says. While some work on artificial intelligence, others are focussed on the outer physical appearance of robots. With co-ordination and plenty of funding, a fairly complete intelligent android could be built within the next decade or two, he says. Under current conditions, it will probably take longer.
But the consensus is that such robots are coming, and that they will most likely be made first in Japan.
Cheok, of Keio University, says he’s not convinced we’ll produce thinking, feeling, conscious robots until at least the middle of the century, if at all. But he is certain we’re heading towards a loving technological future.
Thanks to their Shinto beliefs, the Japanese have fewer cultural barriers standing in the way of forming close emotional bonds with machines. But as robots become smarter and better looking, he says many more people of other cultures will become ensnared.
“I think the thing is that we already develop bonding with not very intelligent beings. As a kid you might have kept a pet hamster or pet mouse. They’re not actually so intelligent. But I think that a kid can even cry when the hamster dies,” he says.
“I’m not a biologist. I don’t know why we developed empathy but I’m sure there’s an important evolutionary reason why we developed empathy. That empathy doesn’t just stop at human beings. We can develop empathy for small creatures and animals. I don’t think the leap is very far where you can develop empathy for robots.”
World Economic Forum 2012 – A Game Changing Year (by YGL Alumni). Young Global Leader Adrian David Cheok is interviewed about Augmented Reality as a Game Changer in 2012. Other interviewees in the video include Aung San Suu Kyi, Ian Solomon (World Bank), Jimmy Wales (Wikipedia) and Salman Khan (Khan Academy).
Extract: With the continuous advancements in computing and media, technology has expanded to include multi-sensory experiences in remote interactions. Using electrical, thermal, and magnetic stimulation technologies, we are currently experimenting with reproducing smell and taste sensations digitally. As a fundamental aim of this research, we will develop novel user interfaces that empower people’s lives with digitized taste and smell communication capabilities. This research will generate important avenues for further research using smell- and taste-based interactions and new media. As the ultimate goal of this project, we will develop devices that can actuate taste and smell sensations digitally through the Internet.
We will need to develop new protocols to codify these sensations, as well as ways to transmit them over the internet. New interfaces for sending and receiving these kinds of sensations will have to be designed. We hope this will open the door to new paradigms in human-computer interface design and new fields of research in academia.
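To illustrate what such a protocol could look like, here is one possible message shape sketched in Python: a sensation record serialized to JSON for transport. The field names, units and ranges are entirely hypothetical; no such standard exists yet, which is precisely the gap this research aims to fill.

```python
# Hypothetical sketch of a wire format for sending a taste or smell
# sensation over the internet. Field names, units and ranges are
# invented for illustration; no such protocol standard exists yet.

import json
import time
from dataclasses import asdict, dataclass


@dataclass
class SensationMessage:
    modality: str     # "taste" or "smell"
    channels: dict    # named intensity channels, each 0.0 to 1.0
    duration_ms: int  # how long the receiving device should actuate
    timestamp: float  # sender clock, seconds since the epoch


def serialize(msg: SensationMessage) -> bytes:
    return json.dumps(asdict(msg)).encode("utf-8")


def deserialize(data: bytes) -> SensationMessage:
    return SensationMessage(**json.loads(data.decode("utf-8")))


if __name__ == "__main__":
    # A "bitter message" of the kind described earlier, ready to travel
    # over any ordinary transport (HTTP, WebSocket, and so on).
    msg = SensationMessage(
        modality="taste",
        channels={"bitter": 0.7, "sour": 0.2},
        duration_ms=1500,
        timestamp=time.time(),
    )
    wire = serialize(msg)
    assert deserialize(wire) == msg
    print(wire.decode("utf-8"))
```

Codifying sensations as named channels rather than raw actuator settings would let very different devices (sprays, electrode strips, thermal plates) interpret the same message according to their own capabilities, much as audio formats abstract away the loudspeaker.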