With businesses struggling to fill vacancies and thousands of computer science graduates out of work, universities could play a critical role in averting an IT skills crisis by ensuring the UK is self-sufficient in IT professionals.
Meanwhile, thousands of experienced and newly qualified IT professionals are unemployed, under-employed or working in jobs unrelated to their qualifications.
Figures from the Higher Education Statistics Agency (Hesa) show that computer science graduates are the largest group of unemployed graduates in the UK. Some 14% of recent computer science graduates are unemployed, compared with 13% of graduates in communications, 5% in education, 4% in veterinary science, and almost none in medicine and dentistry.
“In the UK, just learning the basics of programming is not enough because it has become a commoditised industry. There are hundreds of thousands of graduates in India and China who are really good at programming, so a lot of these things can be outsourced,” said Adrian Cheok, professor of pervasive computing at City University London.
“Graduates in the UK cannot just rely on the technical skills of programming. They have to become much more focused on the networking and business skills required to succeed,” said Cheok.
He said all jobs require human skills and computer scientists must use their people networks: “Every job, no matter what industry, is very much human focused – it is people who control entry to a job, control promotion, control opportunities; computers don’t hire people.
“So it is really critical for people to realise that one of the best ways to improve your career is to leverage your network. Your human network is critical, so when students are at university, it’s essential they build up a network and expand it.
“Almost all opportunities that come about are through people.”
Cheok said universities play an important role and need to change: “Universities must adapt to this too, with courses such as business and computing. Pure computer science is okay if you want to be a researcher or academic. Just knowing how to program in Java and C is no good because anyone can.”
Cheok’s research at City University London focuses on multi-sensory human communication via the mobile internet.
Researchers at City University London are in talks with the finance sector about using wearable technology to provide trading executives with real-time data 24 hours a day.
The catering and healthcare industries are also interested in the pervasive computing technologies currently in development.
Adrian David Cheok, professor of pervasive computing at the university, said people are currently fully focused on screens for information and there is a limit to what can be absorbed. By sending messages through touch and smell via the mobile internet rather than just audio-visual data, humans can consume more information.
With developments such as big data technology and 4G there is more information available 24/7, but a limited ability to absorb it.
Cheok and his team have developed a ring that can receive a message over the internet. This ring can be connected to an application that monitors big data. If there are changes in things such as stock prices, a message could be sent to the ring through the sense of touch.
“[A finance firm] is looking to use the ring for real-time data for finance professionals because you can’t be in front of your terminal 24 hours a day. But there are certain stocks and indicators they have to always monitor,” said Cheok. “By having something very personal on your body, like wearable technology, 24 hours a day they can, for example, get information about whether a stock is going up or down.
“The thing is we have access to infinite data, but to effectively interact with that data and with the physical world we need to use all of our senses for communication. Basically, right now we’re using all of our concentration on screens, so there is a limit to how much we can absorb. We can’t always be looking at a screen; you have to do things with your body.”
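To make this concrete, here is a minimal sketch in Python of the kind of alert loop Cheok describes. It is illustrative only: the article names no APIs, so `fetch_price` stands in for a real market-data feed (simulated below with a random walk) and `send_pulse` stands in for whatever wireless interface the haptic ring would actually expose.

```python
"""Sketch of a big-data-to-haptics alert loop (all names hypothetical)."""
import random
import time

MOVE_THRESHOLD = 0.02  # alert the wearer on a 2% move between checks


def fetch_price(symbol: str) -> float:
    """Stand-in for a real market-data call: a simple random walk."""
    fetch_price.last = getattr(fetch_price, "last", 100.0) * random.uniform(0.97, 1.03)
    return fetch_price.last


def send_pulse(pattern: str) -> None:
    """Stand-in for transmitting a vibration pattern to the ring."""
    print(f"ring pulse: {pattern}")


def monitor(symbol: str, checks: int = 10, interval: float = 0.5) -> None:
    last = fetch_price(symbol)
    for _ in range(checks):
        time.sleep(interval)
        price = fetch_price(symbol)
        change = (price - last) / last
        if change >= MOVE_THRESHOLD:
            send_pulse("long-long")    # e.g. two long buzzes for "up"
        elif change <= -MOVE_THRESHOLD:
            send_pulse("short-short")  # two short buzzes for "down"
        last = price


if __name__ == "__main__":
    monitor("ACME")
```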
He said the research team is also talking to Mugaritz, a Michelin-starred restaurant in San Sebastian, Spain, about supporting its advertising.
“The restaurant can fit only a limited number of people in every night, and they want to expand their customer base. How do they do that? They already have a website with photos, but people can’t understand the experience. We’re working with them to make an app: not only will you see the food, you’ll be able to smell it as well. This virtual sense of presence means advertising and marketing can benefit,” he said.
“It’s a good example of where audio-visual data isn’t enough. If you want to have an experience of food, then taste and smell are essential. We need to bring in all of the senses to communicate through the internet, so this is a real-world example of how this could be used,” added Cheok.
Smell is connected to the limbic system in the brain; it can directly trigger memory
– Adrian David Cheok, City University London
The healthcare industry is also looking at technologies being developed by the team. For example, smells can be automatically triggered in the room of a patient as a reminder to take medication.
“Smell is connected to the limbic system in the brain; it can directly trigger memory. We’re in discussions with a group working with dementia patients. Their biggest problem is that patients forget their medication, and because smell directly affects memory and emotions, it can be used to remind them to take it,” he said.
Cheok demonstrated a smell being transmitted over the internet to a mobile phone (pictured above). This uses a chemical pack attached to a phone, and a message triggers a smell. About 10,000 of these have already been sold in Japan and the City University team expects to bring them to the UK soon.
Other innovations in development include a “hugging pyjama” that parents can use to hug their children remotely. The concept could also have applications in the care industry. A person hugs a jacket and it sends a message to a jacket worn by the recipient, who feels the hug.
Cheok began looking at augmented reality about 15 years ago, when it was still a very early research area, and he wanted to create augmented reality systems. He received a military grant to work on augmented reality for soldiers, to help them understand their environment in urban combat.
Communication is moving beyond barriers, says Rhodri Marsden
04 JANUARY 2014
This device uses electrodes to convince the brain that it is ‘tasting’ something
Websites and apps are frequently described by their creators as offering a “rich experience”. The beautiful designs, intuitive layouts and compelling interactivity may well be engaging and satisfying to use, but when they’re hailed as being a “feast for the senses”, it’s evident that they’re a feast for merely two.
Online entertainment is about sight and sound; everything is mediated through a glass panel and a speaker, leaving us well short of being immersed in an alternative reality. But with studies having demonstrated that more than half of human communication is non-verbal, scientists have been working on ways of communicating touch, taste and smell via the internet, and many of those experiments have been gathering pace.
“What do you smell?” asks Adrian Cheok, professor of pervasive computing at City University London. The whiff of melon is unmistakable; it emerged from a tiny device clipped to an iPhone and was triggered by Cheok standing on the other side of the room. “Right,” he says. “These devices have been commercialised in Japan – they’re selling 10,000 units a month – and they’re bringing smells into a social interface.” It’s still early days with this technology; the device I’m holding is similar to an inkjet printer in that it contains a melon “smell sachet”, and when it’s empty you have to buy another one. Nor is it a particularly new concept; in 1999, Wired magazine ran a front cover story about a company called Digiscents that had produced a USB “personal scent synthesiser” for your computer called the iSmell. Digiscents folded two years later. But the technology that failed to excite us back then now looks slightly less gimmicky in the context of modern smartphone usage, with its super-connectivity and emoticons galore.
On the surface, Cheok’s projects are fun, almost throwaway. “I’ve worked on hugging pyjamas,” he says. “They consist of a suit you can put on your body to virtually hug someone, remotely. Then we have these small haptic rings; if I squeeze my ring someone else will feel a squeeze on theirs through the internet – like a remote sensation of hand-holding.” He’s also been working on a device with electrodes that excites taste receptors on the tongue, producing an artificial sensation of taste in the brain. Similar work is also under way at the National University of Singapore, where a team of researchers is constructing a “digital lollipop” that fools the tongue into experiencing sweet, salt, sour or bitter tastes.
Adrian Cheok demonstrates one of his creations
In the shorter term, the applications of these devices seem slightly frivolous; Cheok’s rings, for example, are being turned into a product that the music industry plans to sell to fans. “You go to the concert,” he says, “the pop star would send a special message, and if you’re wearing the ring you’d get a squeeze on your finger.” I grimace slightly, and he laughs.
“Fortunately or unfortunately,” he says, “that’s where they’ve decided that the money is – but we need to explore the boundaries of how these things can be used, because scientists and inventors can’t think of all the possibilities. For example, Thomson Reuters has been in touch to ask about using the rings to send tactile information about stock prices or currency movements.”
Our transition to an internet of all the senses is evidently dependent on the breadth of information that can be conveyed from one person to another as a series of zeroes and ones. “You have to find a way of, say, transmitting smell digitally, without using a sachet,” says Cheok.
“So I’m working with a French neuroscientist, Olivier Oullier, on a device which can produce an artificial sensation of smell through magnetic actuation. The olfactory bulb in our nasal cavity that’s responsible for smell can be stimulated by pulsing magnetic fields. So this is about directly exciting the brain’s neural path by bypassing the external sensor – in this case the human body.”
This immediately plunges us into what seems like incredibly futuristic territory, where brains are communicating sensory information directly with other brains across digital networks. But it’s already been demonstrated by the synthetic neurobiology group at MIT (Massachusetts Institute of Technology) that optical fibre can be connected to neurons, and Cheok is excited about where this may lead in the relatively short term. “We will have direct connection to the brain within our lifetime,” he says, “although what level that will be I’m not sure. Physical stimulation of neurons may not produce the effects that we would hope for and predict.”
Few of us can conceive of the pace with which technological power is developing. Ray Kurzweil (author, futurist, and a director of engineering at Google) predicts that by 2025 we’ll have a computer which has the processing power of the human brain, and by 2045 it’ll have the processing power of six billion brains – ie, everyone on the planet. Cheok sees these as hugely important tipping points for society. “If you’re able to download your brain to a computer, there are major philosophical questions that we’ll have to deal with in the next 30 years, such as whether we’re human, or whether we’re computers.”
Society will also have to work out how it’s going to handle the hyper-connectivity of a multisensory internet – bearing in mind that we can already become deeply frustrated by the few kilobytes of information contained within the average overloaded email inbox. Text messages that are not replied to already provoke consternation – what about unreciprocated touches, provocative odours or unwanted tastes?
“Our brains haven’t changed to cope with infinite communication,” says Cheok. “We don’t have a mechanism for knowing when there’s too much, in the way that we do when we’ve eaten too much food. Communication is not just a desire, it’s a basic need – but we’ve gone from being hunter-gatherers in groups of 20 or 30 to being in a world of infinite data. We could literally gorge on communication and be unable to stop. We’ll have to find new norms and new mechanisms, but it’s difficult to predict what they will be.”
Marshall McLuhan, the Canadian philosopher of communication theory, famously used the term “global village” to describe the effect of connected media upon the world’s population; it has become overused, but Cheok believes that new sensory-communication channels will demonstrate how prescient that prediction was. “For most of human history, we didn’t have privacy,” he says. “Everyone knew who was doing what. And these developments will mean that we become more and more open – the end of secrecy, almost bringing us back to the way that life used to be in hunter-gatherer times. Except, of course, it’s now global. A lot more people will know.”
The implications of the work of Cheok and his contemporaries seem to sit midway between exciting and terrifying, but in the shorter term it’s about focusing on relatively mundane objectives, such as emitting multiple odours from a smartphone. “People will get used to this new mode of communication,” says Cheok, “and develop new languages. We don’t yet have a language of smell, or of touch; exactly the same pressure in terms of a touch can have a completely different response in the brain, depending on context. But combined with emotion and the subconscious, it’ll bring a heightened sense of presence. I want us to be able to eat together across the internet. I’ve no idea what that will feel like,” he adds, smiling, “but I’ve always believed that human communication goes far beyond the logical.”
A team led by City University London’s Mixed Reality Lab, together with academics from other universities, is among the finalists in the HackingBullipedia Global Challenge, aimed at discovering the most inventive design and technology to support the world’s largest repository of gastronomic knowledge.
A combined team comprising academics from City University London’s Mixed Reality Lab, University of Aix-Marseille (France) and Sogang University (South Korea) has made the final of this year’s HackingBullipedia Global Challenge aimed at discovering the most inventive design and technology to support the world’s largest repository of gastronomic knowledge.
Led by Adrian Cheok, Professor of Pervasive Computing in the School of Informatics, the team’s competition entry is titled “Digital Olfaction and Gustation: A Novel Input and Output Method for Bullipedia”.
The team proposes novel methods of digital olfaction and gustation as input and output for internet interaction, specifically for creating and experiencing the digital representation of food, cooking and recipes on the Bullipedia. Other team members include Jordan Tewell, Olivier Oullier and Yongsoon Choi.
No stranger to digital olfaction applications in the culinary space, Professor Cheok recently gave a Digital Taste and Smell presentation to the world’s third-ranked chef, Andoni Luis Aduriz, at Mugaritz restaurant in San Sebastian, Spain.
The HackingBullipedia Global Challenge was created by the renowned, world-leading culinary expert Chef Ferran Adrià i Acosta.
The jury, comprising some of the best culinary and digital technology experts in the world, arrived at a shortlist of four teams after carefully sifting through 30 proposals from three continents, drawn from a mix of independent and university teams.
The other teams in the final are from Universitat Pompeu Fabra (Barcelona); the Technical University of Catalonia; and an independent (non-university) team from Madrid.
On 27 November, two representatives from each of the four finalist teams will pitch their proposal and give a demonstration to the competition’s judges, after which the winner will be decided.
Professor Cheok is very pleased that City will be in the competition final:
“I am quite delighted that we were able to make the final of this very challenging and prestigious competition. There were entries from various parts of the world covering a broad spectrum of expertise including a multidisciplinary field of scientists, chefs, designers, culinary professionals, data visualisation experts and artists. We are confident that our team has prepared an equally challenging and creative proposal which will be a game-changer in the gastronomic arena.”
Adrian Cheok with his taste-transmitting device. Photos by Jonathan Shkurko
Adrian Cheok, professor of pervasive computing at City University London and director of the Mixed Reality Lab at the National University of Singapore, is on a mission to transform cyberspace into a multi-sensory world. He wants to tear through the audiovisual paradigm of the internet by developing devices able to transmit smells, tastes, and tactile sensations over the web.
Lying on the desk in Cheok’s lab is one of his inventions: a device that connects to a smartphone and shoots out a given person’s scent when they send you a message or post on your Facebook wall. Then there’s a plexiglass cubic box you can stick your tongue in to taste internet-delivered flavours. Finally, a small plastic and silicone gadget with a pressure sensor and a moveable peg in the middle. It’s a long-distance-kissing machine: you make out with it, and your tongue and lip movements travel over the internet to your partner’s identical device—and vice versa.
“It’s still a prototype but we’ll be able to tweak it and make it transmit a person’s odour, and create the feeling of human body temperature coming from it,” Cheok says, grinning as he points at the twin make-out machines. Just about the only thing Cheok’s device can’t do is ooze digital saliva.
I caught up with Cheok to find out more about his work toward a “multi-sensory internet.”
The make-out device, plugged into an iPhone
Motherboard: Can you tell us a bit more about what you’re doing here, and what this multi-sensory internet is all about?
There is a problem with the current internet technology. The problem is that, online, everything is audiovisual and behind a screen. Even when you interact with your touchscreen, you’re still touching a piece of glass. It’s like being behind a window all the time. Also, on the internet you can’t use all your senses—touch, smell and taste—like you do in the physical world.
Here we are working on new technologies that will allow people to use all their senses while communicating through the Internet. You’ve already seen the kissing machine, and the device that sends smell-messages to your smartphone. We’ve also created devices to hug people via the web: You squeeze a doll and somebody wearing a particular bodysuit feels your hug on their body.
What about tastes and smells? How complex are the scents you can convey through your devices?
We’re still at an early stage, so right now each device can just spray one simple aroma contained in a cartridge. But our long-term goal is acting directly on the brain to produce more elaborate perceptions.
What do you mean?
We want to transmit smells without using any chemical, so what we’re going to do is use magnetic coils to stimulate the olfactory bulb [part of the brain associated with smell]. At first, our plan was to insert them through the skull, but unfortunately the olfactory part of the brain is at the bottom, and doing deep-brain stimulation is very difficult.
And having that stuff going on in your brain is quite dangerous, I suppose.
Not much—magnetic fields are very safe. Anyway, our present idea is to place the coils at the back of your mouth. There is a bone there called the palatine bone, which is very close to the region of your brain that makes you perceive smells and tastes. In that way we’ll be able to make you feel them just by means of magnetic actuation.
Cheok demonstrates the taste-transmitter
But why should we send smells and tastes to each other in first place?
For example, somebody may want to send you a sweet or a bitter message to tell you how they’re feeling. Smell and taste are strongly linked with emotions and memories, so a certain smell can affect your mood; that’s a totally new way of communicating. Another use is commercial. We are working with the fourth best restaurant in the world, in Spain, to make a device people can use to smell the menu through their phones.
Can you do the same thing also when it comes to tactile sensations? I mean, can you put something in my brain to make me feel hugged?
It is possible, and there are scientists in Japan who are trying to do that. But the problem with that is that, for the brain, the boundary between touch and pain is very thin. So, if you perform such stimulation you may very easily trigger pain.
It looks like you’re particularly interested in cuddling distant people. When I used to live in Rome, I once had a relationship with a girl living in Turin and it sucked because, well, you can’t make out online. Did you start your research because of a similar episode?
Well, I have always been away from my loved ones. I was born in Australia, but I moved to Japan when I was very young, and I have relatives living in Greece and Malaysia. So maybe my motivation has been my desire to feel closer to my family, rather than to a girl. But of course I know that the internet has globalized our personal networks, so more and more people have long-distance relationships. And, even if we have internet communications, the issue of physical presence is very relevant for distant lovers. That’s why we need to change the internet itself.
The scent device in action
So far you have worked on a long-distance-hugging device and a long-distance-kissing machine. You also have gadgets that can transmit a person’s body odour. If I connect the dots, the next step will be a device for long-distance sex.
Actually, I am currently doing some research about that. You see, the internet has produced a lot of lonely people, who only interact with each other online. Therefore, we need to create technologies that bring people physically—and sexually—together again. Then, there’s another aspect of the issue…
What’s that?
As you noticed, if you put all my devices together, what you’re going to have soon are sorts of “multi-sensory robots”. And I think that, within our lifetime, humans will be able to fall in love with robots and, yeah, even have sex with them.
It seems to me all the work you’re doing here may be very attractive for the internet pornography business.
Of course, one of the big industries that could be interested in our prototypes is the internet sex industry. And, frankly speaking, that being a way of bringing happiness, I think there’s nothing wrong with that. Sex is part of people’s lives. In addition, very often the sex industry has helped to spur technology.
But so far I haven’t been contacted by anybody from that sector. Apparently, there’s quite a big gap between people working in porn and academia.
You can touch the screen of your PC or mobile phone and interact with that inanimate object, but can you smell it? And if you can smell it, how about tasting it? It may sound fanciful, but Professor Adrian Cheok believes it is near and achievable. He has been working on a device that will allow users to smell the person they are talking to on the phone. He joins Click to demonstrate ChatPerf and the ability to smell and taste our technology.
Researchers believe we will become emotionally attached to robots, even falling in love with them. People already love inanimate objects like cars and smartphones. Is it too far a step to think they will fall deeper for something that interacts back?
“Fantastic!” says Adrian Cheok, of Japan’s Keio University mixed reality lab, when told of the Paro study (Paro is a therapeutic robot seal used with dementia patients). Professor Cheok, from Adelaide, is at the forefront of the emerging academic field of Lovotics, or love and robotics.
Cheok believes the increasing complexity of robots means they will have to understand emotion. With social robots that may be with you 24 hours a day, he says it is “very natural” people will want to feel affection for the machine. A care-giver robot will need to understand emotion to do its job, and he says it would be a simple step for the robot to express emotion. “Within a matter of years we’re going to have robots which will effectively be able to detect emotion and display it, and also learn from their environment,” he says.
The rather spooky breakthrough came when artificial intelligence researchers realised they did not need to create artificial life. All they needed to do was mimic life, which makes mirror neurons – the basis of empathy – fire in the brain. “If you have a robot cat or robot human and it looks happy or sad, mirror neurons will be triggered at the subconscious level, and at that level we don’t know if the object is alive or not, we can still feel empathy,” Cheok says. “We can’t really tell the difference if the robot is really feeling the emotion or not and ultimately it doesn’t matter. Even for humans we don’t know whether a person’s happy or sad.” He argues if a robot emulates life, for all intents and purposes it is alive.
Psychologist Amanda Gordon, an adjunct associate professor at the University of Canberra, is sceptical. “It’s not emotional, it’s evoking the emotion in the receiver,” she says. “That seal isn’t feeling anything. It’s not happy or sad or pleased to see you.”
She says the risk is that people fall for computer programs instead of a real relationship. “Then you’re limiting yourself. You’re not really interacting with another. Real-life relationships are growth-ful, you develop in response to them. They challenge you to do things differently.”
Cheok’s research shows 60 per cent of people could love a robot. “I think people fundamentally have a desire, a need to be loved, or at least cared for,” he says. “I think it’s so strong that we can probably suspend belief to have a loving relationship with a robot.”
Probably the most advanced android in the world is the Geminoid robot clone of its creator Hiroshi Ishiguro, director of the Intelligent Robotics lab at Osaka University. Professor Ishiguro says our bodies are always moving, so he programmed that realistic motion into his creation along with natural facial expressions.
The one thing it does not do is age, which means 49-year-old Ishiguro is constantly confronted with his 41-year-old face. “I’m getting old and the android doesn’t,” he says. “People are always watching the android and that means the android has my identity.” So he has had plastic surgery – at $10,000, he says, it is cheaper than the $30,000 it would cost to build a new head.
Robots can help kids with autism who do not relate to humans. Ishiguro is working with the Danish government to see how his Telenoid robots can aid the elderly.
Moyle says she has had inquiries from throughout Australia about Paro. A New Zealand study showed people with dementia interacted more with a Paro than with a living dog.
“There are a lot of possible negative things [that artificial intelligence and robots could lead to],” Cheok says, “and we should be wary as we move along. We have to make sure we try to adjust. But in general I think the virtual love for the characters in your phone or screen or soon robots is ultimately increasing human happiness, and that’s a good thing for humanity.”
This week I had a chance to visit Dr. Adrian Cheok and his students at the Mixed Reality Lab at Keio University. The research they’re conducting is based around the notion that in the future technology will shift from today’s ‘Information Age’ to an ‘Experience Age’. Dr. Cheok predicts that we will experience the realities of other people, as opposed to just reading about them, listening to them, or watching a video on a glass screen.
Visiting the Mixed Reality Lab was a refreshing experience. I’ve come to associate terms like ‘Augmented Reality’ with things like Sekai Camera, or the fascinating human Pac-man game that his lab created a few years back [1]. But Dr. Cheok points out quite rightfully – and perhaps surprisingly – that one of the earliest examples of AR was Sony’s Walkman, the first device that allowed people to have their own personal sounds with them all the time.
Beyond Sound and Vision
Once we accept the idea that augmented/mixed-reality is not just limited to vision, then it opens up a whole world of possibilities. And these are the possibilities that Dr. Cheok and his students are researching. He explains:
I became interested to see if we could extend augmented reality to other senses. To touch. At first I made a system for human-to-pet communication. We made a jacket for a chicken that allowed a person to convey touch to a chicken remotely. Then we made Huggy Pajama, which could be used to hug a child remotely [2].
Ring-u
While projects like this might strike us as a little strange — or even wacky — it’s important to note that they can be far more practical than you might think at first glance. A version of Huggy Pajama called T Jacket has subsequently been developed for therapeutic purposes. So, for example, a child with autism could be comforted remotely with hugs sent over the internet by smartphone.
Readers may recall that we previously featured another remarkable haptic communication project from the Mixed Reality Lab called Ring-u. The idea here is that vibrating messages can be sent over the internet, back and forth between a pair of rings, and there is now a smartphone interface for the ring as well. This project has perhaps far larger potential in the consumer electronics space, and the team is speaking with toy companies and high-end jewelers about possible future developments.
Taste the Future
But perhaps the biggest challenge for Dr. Cheok and his team is figuring out how to digitize the other two remaining senses:
Smell and taste are the least explored areas because they usually require chemicals. [But] we think they are important because they can directly affect emotion, mood, and memory, even in a subconscious way. But currently it’s difficult because things are still analog. This is like it was for music before the CD came along.
Amazingly, the team has developed a prototype electric taste machine, and I was lucky enough to try it out during my visit. The device in its current form is a small box with two protruding metal strips, between which you insert your tongue to experience a variety of tastes. For me, some were stronger than others, with lemon and spicy being the strongest. It works by using electric current and temperature to communicate taste, and I experienced what felt like a fraction of the intended tastes – but it was very impressive. I’m told that in the future this system could even assume a lollipop-like form, which would certainly be very interesting.
Electric taste machine
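The article gives no actual stimulation parameters, so the sketch below is purely illustrative: it shows how taste labels might be digitised as a lookup table of electrical and thermal settings. Every value is a made-up placeholder, not a measurement from the Mixed Reality Lab’s device.

```python
"""Illustrative digitisation of taste: all values are invented placeholders."""
from dataclasses import dataclass


@dataclass
class TasteSetting:
    current_ua: int          # current through the tongue electrodes, microamps
    frequency_hz: int        # pulse frequency of the stimulation
    electrode_temp_c: float  # electrode temperature, degrees Celsius


# Hypothetical lookup table; a real device would be calibrated per user.
TASTE_TABLE = {
    "sour":   TasteSetting(current_ua=150, frequency_hz=200, electrode_temp_c=25.0),
    "salty":  TasteSetting(current_ua=90,  frequency_hz=60,  electrode_temp_c=25.0),
    "bitter": TasteSetting(current_ua=120, frequency_hz=120, electrode_temp_c=30.0),
    "sweet":  TasteSetting(current_ua=60,  frequency_hz=40,  electrode_temp_c=35.0),
}


def settings_for(taste: str) -> TasteSetting:
    """Look up the stimulation settings for a named taste."""
    return TASTE_TABLE[taste]


if __name__ == "__main__":
    print(settings_for("sour"))
```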
The lab is also collaborating with Japanese startup ChatPerf, which you may recognize as the company that developed a smell-producing attachment for smartphones. They will also conduct a formal academic study to see to what level smell can affect communication between individuals. But even with ChatPerf, the creation of smells is still analog, using cartridges of liquid to emit odors. Later on, Dr. Cheok hopes to simulate smells in a non-chemical, digital way, noting that it can be done via magnetic stimulation of the olfactory bulb.
So while experiments like these sometimes cause lots of laughs and raised eyebrows, the work is quite important in expanding how we see technology’s role in our lives.
These are just a few of the great projects that the Mixed Reality Lab is working on, and we hope to tell you about others in the future.
Although we are now in the age of the Internet, our schools are still stuck in the industrial age. As a result, the gap between our schools and reality is widening and could end in total disruption.
There is a clear link between our schools and the factories of the industrial age. In the production line system developed in the 19th and 20th centuries, each individual had to work at the pace of the industrial process, completing repetitive tasks, and was often banned from speaking.
The current school system is eerily similar. Students move along a linear progression of years, semesters and subjects. Every student studies at the same pace, receives grades and takes exams at the same time. If you excel at maths, you are likely to get bored. If you are bad at maths, you are likely to receive bad grades. No matter, everyone must move straight along the production line and repeat the same task over and over again to pass the exam. In class, you are not allowed to talk but must sit passively and let the teacher transfer information at a set speed.
It is not surprising that schools are modelled on the production line. Society, government and businesses needed manpower for the factories and companies of the industrial age. They set up systems that moulded workers into such manpower.
This model is archaic and unsuited for the Internet age, the age of knowledge. Firstly, we do not need factory workers – we need entrepreneurs, inventors, creative business people and designers. It is difficult to compete in global manufacturing. We can compete only in high value-added sectors such as new products, new services and creative industries.
Secondly, the Internet age allows us to discard the linear model. We have the tools and the ability to learn at our own pace. In fact, we can revive some educational practices of the pre-industrial age, such as the apprentice system. Each person keeps working on something until he or she masters it. A maths exam need not be set for the whole class on a specific day. Instead, students can be given continuous online mini tests. When they have mastered one topic, they move on to the next at their own pace.
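One way to picture this mastery-based pacing is a loop that only advances a student once enough consecutive mini-tests have been passed. The sketch below is a generic illustration of that idea, not the logic of any particular platform; the topics, pass mark and pass count are all invented for the example.

```python
"""Sketch of mastery-based pacing: advance only after repeated passes."""

TOPICS = ["fractions", "ratios", "linear equations"]
PASS_MARK = 0.8      # fraction of questions answered correctly to pass
REQUIRED_PASSES = 3  # consecutive passes needed to master a topic


def next_topic(scores_by_topic: dict) -> str:
    """Return the first topic the student has not yet mastered."""
    for topic in TOPICS:
        recent = scores_by_topic.get(topic, [])[-REQUIRED_PASSES:]
        mastered = (len(recent) == REQUIRED_PASSES
                    and all(score >= PASS_MARK for score in recent))
        if not mastered:
            return topic
    return "course complete"


if __name__ == "__main__":
    history = {
        "fractions": [0.9, 0.85, 0.95],  # three passes: mastered
        "ratios": [0.7, 0.9],            # not yet mastered
    }
    print(next_topic(history))  # -> "ratios"
```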
The main obstacles to implementing such a new model are the inertia and conservatism of the education sector. However, just like every other industry, education is being disrupted and revolutionized by the Internet. Classes and lectures will go online. Students can view them at their own pace and be evaluated interactively.
Students will be much happier because they can study independently and test their limits (this is how video games work, and games are a good model for learning). Homework, on the other hand, will be done in classrooms and lecture halls. Being physically together will be all about solving problems, doing projects, learning through practical tasks, and working in teams with other students and teachers.
Learning and knowledge production will be done simultaneously. This is much more suited to the great technological and social changes of the 21st century. We need to learn more about tacit knowledge rather than explicit knowledge. Explicit knowledge becomes rapidly out of date when technology is changing so quickly. Tacit knowledge helps us to deal with such change. So does learning by doing and working in teams.
KOLLABORATE.IO 93% of all human communication is visual, but most online collaboration solutions are text-based. Until now. Kollaborate introduces real-time visual collaboration without the hassle.
PRESENTATION.IO Present in real time to anyone on any device. No downloads, no installations: you simply move through your slides, which change simultaneously on all connected devices (a minimal sketch of this broadcast pattern follows below).
REAKTIFY A realtime feedback analytics tool. Google Analytics tells you what happened on your site, Kissmetrics tells you who did it, Reaktify tells you why.
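As flagged above, here is a minimal sketch of the broadcast pattern a realtime presentation tool like this presumably uses: the presenter changes the current slide and every connected viewer is notified at once. The network layer (WebSockets in a real product) is replaced here by in-process callbacks, and all names are hypothetical.

```python
"""In-process sketch of realtime slide sync; network layer omitted."""


class SlideSession:
    def __init__(self, slide_count: int):
        self.slide_count = slide_count
        self.current = 0
        self.viewers = []  # callbacks invoked on every slide change

    def connect(self, viewer):
        """Register a device; late joiners immediately see the current slide."""
        self.viewers.append(viewer)
        viewer(self.current)

    def goto(self, index: int):
        """Presenter moves to a slide; broadcast to every connected device."""
        if 0 <= index < self.slide_count:
            self.current = index
            for viewer in self.viewers:
                viewer(index)


session = SlideSession(slide_count=12)
session.connect(lambda i: print(f"laptop shows slide {i}"))
session.connect(lambda i: print(f"phone shows slide {i}"))
session.goto(1)  # presenter advances; all devices update together
```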
Assemblage was founded on one simple quest: to make it easy for people and companies to collaborate online with multiple people at the same time. Since that first spark of an idea in 2011, Assemblage products have gone on to help companies and people in over 140 countries around the world work together in real time on the web.
Upon his appointment as advisor, Adrian Cheok said: “My interest is in the future of the internet, where we will have multisensory communication with all five senses. Assemblage is helping to increase experience communication.”