Teacherbot

posted in: Media

 

 

Schools are meant to prepare learners for the future outside of school. Current developments in AI, machine learning and robotics suggest a future of fully shared social spaces, including learning environments (LEs), with robotic personalities. Today’s learners (as well as teachers) should be prepared for such a future. AI in Education (AIED) has focused on the implementation of online and screen-based Pedagogical Agents (PAs); however, research findings support richer learning experiences with embodied PAs, and hence recent studies in AIED have focused on robots as peers, teaching assistants or instructional materials. Their classroom uses employ gamification approaches and are mostly based on a one-robot-one-student interaction style, whereas current educational demands favour collaborative approaches to learning. Robots as instructors are novel and are considered a major challenge due to the requirements of good teaching, including the demands for agency, affective capabilities and classroom control, which machines are believed to be incapable of.

Current technological capabilities suggest a future with full-fledged robot teachers teaching actual classroom subjects. We therefore implement a robot teacher with capabilities for agency, social interaction and classroom control within a collaborative learning scenario involving multiple human learners and the teaching of basic Chemistry, in line with the current focus on STEM areas. We consider the PI pedagogical approach an adequate technique for implementing robotic teaching, based on its inherent support for instructional scaffolding, learner control, conceptual understanding and learning by teaching. We are exploring these features in addition to the agentic capabilities of the robot, and their effects on learner agency as well as on improved learning in terms of engagement, learner control and social learning. In the future, we will focus on other key concepts in learning (e.g. assessment), other types of learners (e.g. learners with cognitive/physical disabilities), other interaction styles and LEs. We will also explore cross-community approaches that leverage the integration of sibling communities.
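Assuming the PI approach here refers to Peer Instruction (pose a concept question, collect individual votes, let peers discuss, revote, then explain), a minimal sketch of one such teaching cycle might look as follows. The robot interface (say(), collect_vote(), wait_for_discussion(), explain()) and the vote-share thresholds are illustrative assumptions, not details of the actual Teacherbot implementation.

```python
# Hypothetical sketch of one Peer Instruction (PI) cycle run by a robot teacher.
# The `robot` object and its methods are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class ConceptQuestion:
    prompt: str
    choices: list      # answer options shown to the class
    correct: int       # index of the correct choice

def peer_instruction_cycle(robot, learners, question):
    """One PI round: individual vote, peer discussion, revote, explanation."""
    robot.say(question.prompt)
    first = {s: robot.collect_vote(s, question.choices) for s in learners}

    share_correct = sum(v == question.correct for v in first.values()) / len(learners)
    if 0.3 < share_correct < 0.7:
        # Mixed first votes: this is where PI's peer scaffolding and
        # learning-by-teaching happen, so trigger a discussion and a revote.
        robot.say("Discuss your answer with a neighbour who chose differently.")
        robot.wait_for_discussion(minutes=2)
        second = {s: robot.collect_vote(s, question.choices) for s in learners}
    else:
        second = first

    robot.explain(question)   # scaffolded wrap-up, keeping classroom control
    return first, second
```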

 

http://imagineeringinstitute.org/teacherbot/

Magnetic Table

posted in: Media

By Nur Ellyza Binti Abd Rahman*, Azhri Azhar*, Murtadha Bazli, Kevin Bielawski, Kasun Karunanayaka, Adrian David Cheok

 

 

 

In our daily life, we use the five basic senses to see, touch, hear, taste and smell. By utilizing several of these senses concurrently, multisensory interfaces create immersive and playful experiences, and as a result they are becoming a popular topic in academic research. Virtual Food Court (Kjartan Nordbo et al., 2015), Meta Cookie (Takuji Narumi et al., 2010) and Co-dining (Jun Wei et al., 2012) are a few interesting prior works in the field. Michel et al. (2015) revealed that dynamic changes in the weight of cutlery influence the user’s perception and enjoyment of food: the heavier the utensils, the stronger the perceived flavour. In line with this, we present a new multisensory dining interface called ‘Magnetic Dining Table and Magnetic Foods’.

 

‘Magnetic Dining Table and Magnetic Foods’ introduces new human-food interaction experiences by controlling utensils and food on the table: modifying their weight, levitating, moving and rotating them, and dynamically changing their shapes (food only). The proposed system is divided into two parts: a controlling part and a controlled part. The controlling part consists of three components: 1) the dining table, 2) an array of electromagnets and 3) a controller circuit; the controlled part consists of two components: 1) magnetic utensils and 2) magnetic foods. The array of electromagnets is placed underneath the table, and the controller circuit controls the field produced by each electromagnet, thereby indirectly controlling the utensils and food on the table. To make edible magnetic food, ferromagnetic materials such as iron and iron oxides (Alexis Little, 2016) will be used. We expect that this interface will positively modify taste and smell sensations, food consumption behaviours, and human-food interaction experiences.
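As a rough illustration of how the controller circuit could sequence the electromagnet array to move an object, consider the sketch below; the grid size, the timing, and the set_coil() driver function are assumptions for illustration, since the post does not specify the actual circuit or firmware.

```python
# Illustrative sketch: dragging a magnetic utensil across the table by
# energizing coils in sequence. Grid size, timing and the set_coil() driver
# are assumptions; the real controller circuit is not described in the post.
import time

GRID = (8, 8)   # hypothetical 8x8 coil array under the table surface

def set_coil(x, y, strength):
    """Placeholder for the controller circuit (e.g. PWM current control)."""
    print(f"coil({x},{y}) -> {strength:.2f}")

def move_object(path, dwell=0.25):
    """Energize coils one after another so the object follows `path`."""
    for (x, y) in path:
        set_coil(x, y, 1.0)    # attract the object to this cell
        time.sleep(dwell)      # give it time to slide over
        set_coil(x, y, 0.0)    # release before the next cell takes over

# Example: drag a fork four cells along row 3.
move_object([(0, 3), (1, 3), (2, 3), (3, 3)])
```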

 

Magnetic Dining Table and Magnetic Foods

Using mobiles to smell: How technology is giving us our senses | The New Economy Videos

posted in: Media


By The New Economy – February 11th, 2014

http://www.theneweconomy.com/videos/using-mobiles-to-smell-how-technology-is-giving-us-our-senses-video

The New Economy interviews Professor Adrian Cheok of City University London to find out about a new technology that will allow people to taste and smell through their mobile phones

Scientists are coming increasingly closer to developing technology that will allow us to use all our senses on the internet. Professor Adrian Cheok, City University London, explains how mobile phones will soon allow people to taste and smell, what the commercial benefits of this technology might be, and just how much it’s going to cost

The online world: it’s all about the visual and sound experience, but when it comes to the other three senses, it can leave us short. Studies have demonstrated that more than half of human communication is non-verbal, so scientists are working on ways to communicate taste, touch, and smell over the internet. I’ve come to City University London to meet Professor Adrian Cheok, who’s at the forefront of augmented reality, with new technology that will allow you to taste and smell through a mobile phone.

The New Economy: Adrian, this sounds completely unbelievable. Have you really found a way to transmit taste and smells via a mobile device?

Adrian Cheok: Yes, in our laboratory research we’ve been making devices which can connect electrical and digital signals to your tongue, as well as your nose. So for example, for taste we’ve created a device which you put on your tongue, and it has electrodes. What those do is artificially excite your taste receptors. So certain electrical signals will excite the receptors, and that will produce artificial taste sensations in your brain. So you will be able to experience, for example, salty, sweet, sour, bitter – the basic tastes on your tongue – without any chemicals.

And with smell we’re going in a couple of tracks. One is using chemicals, it’s a device that you can attach to your mobile phone, and these devices will emit chemicals. So that means that with apps and software on your phone, you can send someone a smell message. For example, you might get a message on Facebook, and it can send the smell of a flower. Or if your friend’s not in a very good mood it might be a bitter smell.

So the next stage of that is, we’re making devices which will have electrical and magnetic signals being transmitted to your olfactory bulb, which is behind your nose. It’ll be a device which you put in the back of your mouth, it will have magnetic coils, and similar to the electrical taste actuation, it will excite the olfactory bulb using electrical currents. And then this will produce an artificial smell sensation in your brain.

Already scientists have been able to connect optical fibre to neurons of mice, and that means that we can connect electrical signals to neurons.

With the rate of change, for example with Moore’s Law, you get exponential increase in technology. I think within our lifetimes we’re going to see direct brain interface. So in fact what you will get is essentially, you can connect all these signals directly to your brain, and then you will be able to experience a virtual reality without any of these external devices. But essentially connecting to the neural sensors of your brain. And of course that also connects to the internet. So essentially what we will have is direct internet connection to our brain. And I think that will be something we will see in our lifetime.

The New Economy: So direct brain interface – that sounds kind of dangerous. I mean, could there be any side-effects?

Adrian Cheok: Well we’re still at the very early stages now. So scientists could connect, for example, one optical fibre to the neuron of a mouse. And so what it has shown is that we can actually connect the biological world of brains to the digital world, which is computers.

Of course, this is still at an extremely early stage now. You know, the bio-engineers can connect one single neuron, so, we’re not anywhere near that level where we can actually connect to humans. You would have to deal with a lot of ethical and also privacy, social issues, risk issues.

Now if you have a virus on your computer, the worst it can do is cause your computer to crash. But you know, you could imagine a worst case: someone could reprogram your brain. So we’d have to think very carefully.

The New Economy: Well why is it important to offer smell over the internet?

Adrian Cheok: Fundamentally, smell and taste are the only two senses which are directly connected to the limbic system of the brain. And the limbic system of the brain is the part of the brain responsible for emotion and memory. So it is true that smell and taste can directly and subconsciously trigger your emotion, trigger your memory.

Now that we’re in the internet age, where more and more of our communication is through the internet, through the digital world, we must bring those different senses – touch, taste and smell – to the internet, so that we can have a much more emotional sense of presence.

The New Economy: What will this be used for?

Adrian Cheok: Like all media, people want to recreate the real world. When cinema came out, people were filming, you know, scenes of city streets. To be able to capture that on film was quite amazing. But as the media developed, then it became a new kind of expression. And I believe it will be the same for the taste and smell media. Now that it’s introduced, at first people will just want to recreate smell at a distance. So for example, you want to send someone the smell of flowers, so Valentine’s Day for example, maybe you can’t meet your lover or your friend, but you can send the virtual roses, and the virtual smell of the roses to his or her mobile phone.

At the next stage it will lead to, I think, new kinds of creation. Take music, for example: before, if you wanted to play music, you needed to play an instrument, like a violin or a guitar. But now young people can compose music completely digitally. There are even applications on your mobile phone where you can compose music with your finger, and it’s really professional. It will be similar for smell and taste. We’ll go beyond just recreating the real world to making new kinds of creation.

The New Economy: So will it also have a commercial use?

Adrian Cheok: For advertising, because smell is a way to trigger emotions and memory subconsciously. Now, you can shut your eyes, and you can block your ears, but it’s very rare that you ever block your nose, because you can’t breathe properly! So people don’t block their nose, and that means advertising can always be channelled to your nose. And also we can directly trigger memory or an emotion. That’s very powerful.

We received interest from one of the major food manufacturers, and we’re having a meeting again soon. They make frozen food, and the difficulty to sell frozen food is, you can’t smell it. You just see these boxes in the freezer, but because it’s frozen, there’s no smell. But they want to have our devices so that when you pick up the frozen food maybe it’s like a lasagne, well you can have a really nice smell of what it would be.

The New Economy: How expensive will this be?

Adrian Cheok: We’re aiming to make devices which are going to be cheap. Because I think only by being very cheap can you make mass-market devices. So our current device actually costs only a few dollars to manufacture.

The New Economy: Adrian, thank you.

Adrian Cheok: Thank you very much.

 

Vegetables Can Taste Like Chocolate by Adrian Cheok, Director of Imagineering Institute

posted in: Media

 

Adrian David Cheok is Director of the Imagineering Institute, Malaysia, and Chair Professor of Pervasive Computing at City University London.

He is the Founder and Director of the Mixed Reality Lab, Singapore. His research focuses on multi-sensory internet communication, mixed reality, pervasive and ubiquitous computing, human-computer interfaces, and wearable computing. Today he talks about how the internet connects us and what we can do by blending reality, our senses, and the internet.

Adrian Cheok is a 2016 Distinguished Alumni Award recipient in recognition of his achievements and contribution in the field of computing, engineering and multisensory communication.

posted in: Media

Adrian is a pioneer in mixed reality and multisensory communication; his innovation and leadership have been recognised internationally through multiple awards.
Some of his pioneering works in mixed reality include innovative and interactive games such as ‘3dlive’, ‘Human Pacman’ and ‘Huggy Pajama’. He is also the inventor of the world’s first electric and thermal taste machine, which produces virtual tastes with electric current and thermal energy.


Olfactometer

posted in: Media

By Adrian David Cheok, Kasun Karunanayaka, Halimahtuss Saadiah, Hamizah Sharoom

 

 

Many of the olfactometer implementations we find today come at a high price and are complex to use. This project aims to develop a simple, low-cost, and easily movable laboratory olfactometer that can be used as a support tool for a wide range of experiments related to smell, taste, psychology, neuroscience, and fMRI. Generally, olfactometers use two types of olfactants: solid or liquid odors. Our laboratory olfactometer (as shown in Figure 1) will support liquid-based odors, and we may later extend it to handle solid odors. We are also considering developing this system into a combined olfactometer and gustometer.

 

In this olfactometer design, we utilize continuous flow (the Lorig design) for good temporal control. The Lorig design is simpler and cheaper than other designs because of its minimal use of parts (Lundstrom et al., 2010). Our olfactometer contains 8 output channels that produce aromas in a precise and controlled manner. Besides that, it also produces a constant humidified flow of pure air. The laboratory olfactometer consists of an oil-less piston air compressor, a filter regulator and mist separator, 2-color display digital flow switches, check valves, solenoid valves, a manifold, TRIVOT glass tubes, connectors, gas hose clips and PU tubing. The control system of the olfactometer consists of an Arduino Pro Mini, a UART converter, a USB cable and a solenoid circuit.

 

The air supply from the oil-less piston air compressor plays an important part in delivering the odor at a constant air pressure to the subject’s nose. The filter regulator combined with the mist separator ensures that the air is clean and carries no contaminants. After the filter, the airflow is metered through a 2-color display digital flow switch. Check valves are connected after the flowmeter to ensure that the air flows in one direction only. After the check valve, the tubing is connected to 9 male fitting connectors which fit directly into the 8 outputs of the manifold. Then, 8 normally-closed solenoid valves are connected to the top of the manifold. The olfactometer can be manually controlled from a computer to send an odour to the nose. A 2-color display digital flow switch is connected after the solenoids to make sure the air flows at around 3 to 5 LPM (to avoid any discomfort to the subject’s nose). When a solenoid valve is switched on by the computer, air passes through the corresponding glass bottle, which can be verified by observing the bubbling inside the bottle. Finally, the airflow picks up the odor from the liquid/solid and passes through a check valve before going straight to the human nose.
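Since the solenoid valves are switched by a computer over the Arduino Pro Mini’s USB-UART link, the host-side control might look roughly like the sketch below. The serial port name and the ASCII command protocol (“O3” to open channel 3, “C3” to close it) are assumptions; the post does not document the actual firmware commands.

```python
# Host-side sketch of controlling the olfactometer's 8 solenoid valves over
# the Arduino Pro Mini's USB-UART link. Port name and command bytes are
# hypothetical; only the 8-channel layout and 3-5 LPM flow come from the post.
import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # hypothetical device name for the UART converter
PULSE_S = 2.0           # how long to present each odor, in seconds

def present_odor(link, channel, duration=PULSE_S):
    """Open one normally-closed solenoid valve, wait, then close it."""
    assert 0 <= channel < 8, "the olfactometer has 8 output channels"
    link.write(f"O{channel}\n".encode())   # open valve -> air bubbles through the bottle
    time.sleep(duration)                   # odorized air (3-5 LPM) reaches the nose
    link.write(f"C{channel}\n".encode())   # close valve; clean humidified air resumes

with serial.Serial(PORT, 9600, timeout=1) as link:
    time.sleep(2)              # let the Arduino reset after the port is opened
    present_odor(link, 3)      # e.g. present the odor loaded in bottle 3
```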

 

http://imagineeringinstitute.org/the-laboratory-olfactometer-2/


Bench of Multi-sensory Memories

posted in: Media

By Stefania Sini, Nur Ain Mustafa, Hamizah Anuar, Adrian David Cheok

 

 

What if cities had dedicated urban interfaces in public spaces that invite people to share stories and memories of public interest, and facilitate the creation of a public narration? What if people shared and accessed these stories and memories while chatting with a public bench? Would the interaction with the bench provide a meaningful, memorable and playful experience of a place?

 

The Bench of Multi-sensory Memories is an urban interface whose objective is to investigate the role of urban media in placemaking. It mediates the creation of a public narration, and affords citizens a playful and engaging interface to access and generate the stories and memories that form this narration.

 

The bench has been designed and fabricated in collaboration with the Malaysian artist Alvin Tan, who has experience with bamboo installations in public spaces. Its structure is robust and allows all the hardware components to be housed easily and safely. The hardware and software system consists of: a) input devices, a USB microphone and Force Sensitive Resistor (FSR) sensors; b) an Analog-Digital/Digital-Analog (AD/DA) converter module board; c) a microcontroller, a Raspberry Pi 3; d) an output device, a speaker; and e) the software, the Google Speech API. The components operate as follows: the FSR sensors detect the presence of a person on the bench through physical pressure, weight and pressing; the AD/DA converter module board reads the analogue values of the FSR sensors and converts them into digital values readable by the microcontroller; and the microcontroller, which has advanced features such as Wi-Fi, Ethernet, Bluetooth, USB, HDMI and an audio jack, easily connects the input and output devices. Currently, the system software implements speech applications, namely text-to-speech and speech-to-text: the Google Speech API generates a voice from text, records the visitor’s voice and translates the speech into text through the output and input devices. Therefore, at the moment, the system performs a scripted sequence that includes text-to-speech and speech-to-text translations. In the short term, we will employ a custom chatbot that can conduct interactive and meaningful conversations.
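The main loop of such a system might be sketched as follows; read_fsr(), speak() and listen() are placeholders standing in for the AD/DA board driver and the Google Speech API calls, whose real interfaces are not given in the post.

```python
# Sketch of the bench's main loop: wait for seated pressure on an FSR, then
# run the scripted text-to-speech / speech-to-text exchange. The three helper
# functions below are placeholders, not the project's actual code.
import time

SIT_THRESHOLD = 500   # hypothetical ADC reading that indicates a seated person

def read_fsr():
    """Placeholder: read one FSR channel via the AD/DA converter board."""
    return 0              # replace with a real ADC read (e.g. over SPI)

def speak(text):
    """Placeholder: Google Speech API text-to-speech, played on the speaker."""
    print(f"bench says: {text}")

def listen():
    """Placeholder: record from the USB microphone, return transcribed text."""
    return ""

SCRIPT = [
    "Hello! I am the Bench of Multi-sensory Memories.",
    "Would you like to share a memory of this place?",
]

while True:
    if read_fsr() > SIT_THRESHOLD:      # someone sat down on the bench
        for line in SCRIPT:
            speak(line)
            reply = listen()            # the visitor's answer, as text
            # (a future chatbot would pick the next line based on `reply`)
        time.sleep(10)                  # avoid immediately re-triggering
    time.sleep(0.1)
```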

 

Bench of Multi-sensory Memories

Multi-sensory Story Book For Visually Impaired Children

posted in: Media

By Edirisinghe Chamari, Kasun Karunanayaka, Norhidayati Podari, Adrian David Cheok

The experience of reading for children is enriched by visual displays. Researchers suggest that through picture-book experiences, children develop socially, intellectually, and culturally. However, the beauty of reading is an experience that sighted children naturally indulge in, and one that visually-impaired children struggle with. Our multi-sensory book is an attempt to create a novel reading experience specifically for visually-impaired children. While a sighted person’s mental imagery is constructed through visual experiences, a visually-impaired person’s mental images are a product of haptics and sounds. Our book introduces multi-sensory interactions through touch, smell, and sound. The concept also aims to address a certain lack of appropriately designed technologies for visually-impaired children.

 

Our book, titled “Alice and her Friend”, folds out to reveal a story about a cat, whose activities are presented with multi-sensory interactions. There are six pages in the book, with different sensors and actuators integrated into each page. The pages were designed with textures, braille, large-font text, sounds, and smell. With this book, we believe we have contributed a new reading experience to the efforts of visually-impaired children to understand the beauty of the world.
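A minimal sketch of the per-page behaviour might look like the following; the page sensors, the sound files and the smell actuator interface are all assumptions for illustration, as the post only states that each page integrates different sensors and actuators.

```python
# Illustrative sketch: when a page's sensor fires, play that page's sound and
# trigger its smell actuator. All names and mappings here are hypothetical.
PAGES = {
    1: {"sound": "alice_intro.wav", "smell": None},
    2: {"sound": "cat_purr.wav",    "smell": "grass"},
    # ... one entry for each of the book's six pages
}

def page_opened(page_id):
    """Placeholder: return True when page `page_id`'s sensor is triggered."""
    return False

def play_sound(filename):
    print(f"playing {filename}")      # stand-in for an audio playback call

def release_smell(scent):
    if scent:
        print(f"releasing {scent}")   # stand-in for driving a scent actuator

def poll_pages():
    """Check every page once and trigger its multi-sensory content."""
    for page_id, content in PAGES.items():
        if page_opened(page_id):
            play_sound(content["sound"])
            release_smell(content["smell"])
```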

 

A Picture Book for Visually Impaired Children

French woman wants to marry a robot as expert predicts sex robots to become preferable to humans

posted in: Media


By NZ Herald – December 24, 2016.

http://www.nzherald.co.nz/lifestyle/news/article.cfm?c_id=6&objectid=11772407

Human-robot marriages may become commonplace by 2050 if not before. Photo / 123RF

 

On the surface, Lilly seems like a blushing young woman ready to marry the man of her dreams who makes her “totally happy.”

Only her partner is a 3D-printed robot named InMoovator, whom she designed herself after realising she was attracted to “humanoid robots generally” rather than other people.

“I’m really and totally happy,” she told news.com.au over email in her tentative English. “Our relationship will get better and better as technology evolves.”

The “proud robosexual” said she always loved the voices of robots as a child but realised at 19 she was sexually attracted to them as well. Physical relationships with other men confirmed the matter.

“I’m really and only attracted by the robots,” she said. “My only two relationships with men have confirmed my love orientation, because I dislike really physical contact with human flesh.”

She has since built her own dream man with open-source technology from a French company, and has lived with him for one year. They are ‘engaged’ and plan to marry when robot-human marriage is legalised in France.

The unconventional relationship has been accepted by family and friends but she said “some understand better than others.”

She won’t reveal whether they have a sexual relationship and is currently in training to become a roboticist in order to take her passion into her everyday life.

While Lilly’s views will strike many as odd, it’s just a sign of things to come according to David Levy.

The chess whiz and authority on Love and Sex with Robots said he expects human-robot marriages to become commonplace by 2050 if not before.

Speaking at the second conference on the issue held in London this week, Mr Levy told a room filled with academics and interested people that advances in artificial intelligence mean robots could become “enormously appealing” partners within the next few decades.

“The future has a habit of laughing at you. If you think love and sex with robots is not going to happen in your lifetime, I think you’re wrong.”

“The first human robot marriages will take place around the year 2050 or sooner but not longer,” he said.

The conference explored a host of issues on the subject including everything from what robots should look like to whether they should be able to “learn” about sexual preferences and feed back information to companies behind them.

University of London Computing Professor Adrian David Cheok said he believes robots will not only become common, but preferable for many people.

“It’s going to be so much easier, so much more convenient to have sex with a robot. You can have exactly what kind of sex you want. That’s going to be the future. That we will have more sex with robots and the next stage is love … we’re already seeing it.”

“Actual sex with humans may be like going to a concert. When you’re at home you can listen to Beethoven’s ninth symphony, it’s good enough and once or twice a year you’ll want to go the Royal Albert Hall and hear it in a concert hall.

“That may be the way sex with humans is going to be. It’s going to be much more easier, much more convenient to have sex with a robot, and maybe much better because that’s how you want it.”
