Nikola Tesla Chatbot is Launched!!

posted in: Media | 0

Nikola Tesla Chatbot is Launched!!

Have FUN with the Nikola Tesla bot!!! An artificial intelligence chatbot that chats with you in formal English. Everyone can try it for free, 24 hours a day, 7 days a week! You can also subscribe to the chatbot to receive the latest scientific news every morning at 9am (GMT+8). Click the link https://www.facebook.com/nikolateslabot/ on Facebook and press ‘Like’. Then message and chat with it at https://www.facebook.com/messages/t/414236662260204 now!!! The chatbot was developed at the Imagineering Institute, Johor, Malaysia (www.imagineeringinstitute.org).

Design For Behavioural Change

posted in: Media | 0

By Predrag K Nikolic, Adrian David Cheok

Design in its different forms, whether for objects, services, or environments, has the potential to influence human behaviour and can create desirable as well as undesirable change. Design has a long history of intentionally acting to bring about positive changes in human perception and lifestyle. Designing for behavioural change therefore begins with an early understanding of behaviour, in which a person’s behaviour reflects his or her own personality and other ‘internal’ factors as well as the physical and social environment.

Design for Behaviour Change as an approach is already accepted in several key areas such as ecology, safety, health, and well-being, and is widely adopted in social design. This project explores interactive media technology, public spaces, and interactive media art and design as potential drivers of behaviour change and social innovation. For this purpose, we developed the interactive installations InnerBody and Before and Beyond, which aim to provoke behaviour change and design aesthetic and emotional user experiences by allowing users to “escape the limitations of existing structures of meaning and expectation within a given practice”. Through these projects, we propose the use of interactive media art & design as part of a design method capable of transforming public space into an environment for user behavioural change, leading to sustainable design choices for the future development of living environments.

Design for Behaviour Change

 

Meeting Points

posted in: Media | 0

By Predrag K. Nikolic, Adrian David Cheok, Sasa Arsovski, Ruhiyati Idayu, Murtadha Bazli

 

 

The project Meeting Points is conceptualized as a set of philosophical discussions between two robots. In its first appearance, the conceptual pillars of the discussions are the philosophical standpoints of Aristotle and Nietzsche on various topics. We used several criteria to select them for our first Meeting Points: the importance of the historical periods to which they belonged as leading thinkers of their time, the improbability of comparing their opinions about virtues and human character, and their significant influence on the revolutionary Renaissance and Dadaist art movements. As such, it is a historical and epic discussion between Aristotle’s Ethical Robot (Magnanimous) and Nietzsche’s Overman Robot (Übermensch).

The interactive installation Meeting Points has no pretensions to be classified as artwork, but rather as “anti-art”, as it tends to criticize contemporary aesthetic, cultural and social changes that result from the mutual interaction between people and technology.

The central characters of this interactive socio-critical drama are the Ethical Robot and the Overman Robot. The first is fed with knowledge collected from some of Aristotle’s main works, such as the Nicomachean Ethics, Poetics, Politics and Metaphysics, and the second from Nietzsche’s Thus Spoke Zarathustra, The Antichrist, Beyond Good and Evil, The Birth of Tragedy and Ecce Homo.

We are trapped in circles of information, without facts, only interpretations, just as the Ethical Robot and the Overman Robot are trapped on a roundabout of meanings, symbols, and metaphors that they mix but do not understand, without facts, only machine interpretations. As such, the tragedy in the installation Meeting Points: Übermensch and Magnanimous is about destroying and rebuilding our knowledge- and technology-addicted society until something good comes up.

The key technical novelty presented in the interactive installation Meeting Points is the combination of chatbot technologies and Recurrent Neural Network (RNN) models, which will enable reinforcement learning in order to create artificial conversational agents that achieve human-level performance. The fact that things can communicate with each other and with humans enables unsupervised learning, reinforcement learning and opportunities for multiplying knowledge.

Neural conversational agent technologies allow us to transform everyday “things” into “smart objects” that can understand and react to their environment. A step further in defining the architectural principles of “smart objects” are cloud speech recognition and speech synthesis technologies, which increase interactivity and raise the level of interaction between people and “smart objects”. Motivated by this technological era, in the interactive installation Meeting Points we will use two neural conversational agents. In order to create philosophical discussions between the two robots, we will use Recurrent Neural Network (RNN) models [5]. The RNN is a character-level language model.

We will train the RNN on selected texts by Nietzsche and Aristotle, and the RNN will model the probability distribution of the next character in the sequence given the sequence of previous characters. This allows us to generate new text one character at a time, as shown in Figure 1: an example RNN with 4-dimensional input and output layers and a hidden layer of 3 units (neurons), from [5]. We will use a standard Softmax classifier [6], and the RNN will be trained with mini-batch Stochastic Gradient Descent [7]. Applying chatbot technology, with the trained Nietzsche and Aristotle neural network models as the conversation base, we will create a Neural Conversation Nietzsche cyber clone and a Neural Conversation Aristotle cyber clone (Figure 2).
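
A minimal sketch of a single step of such a character-level RNN, using the 4-dimensional input/output and 3-unit hidden layer of the example in Figure 1. The weights here are untrained random values for illustration only; a real model would be fitted with mini-batch SGD as described above.

```python
import numpy as np

# Toy character-level RNN step: 4-dimensional input/output, 3 hidden units.
# Weights are random (untrained), purely to illustrate the computation.
np.random.seed(0)

vocab = ['h', 'e', 'l', 'o']
vocab_size, hidden_size = len(vocab), 3

Wxh = np.random.randn(hidden_size, vocab_size) * 0.01   # input -> hidden
Whh = np.random.randn(hidden_size, hidden_size) * 0.01  # hidden -> hidden
Why = np.random.randn(vocab_size, hidden_size) * 0.01   # hidden -> output

def rnn_step(char_index, h):
    """One RNN step: softmax probabilities for the next character, plus new hidden state."""
    x = np.zeros((vocab_size, 1))
    x[char_index] = 1.0                      # one-hot encoding of the input character
    h = np.tanh(Wxh @ x + Whh @ h)           # recurrent hidden-state update
    y = Why @ h                              # unnormalised scores
    p = np.exp(y) / np.sum(np.exp(y))        # Softmax classifier output
    return p, h

h = np.zeros((hidden_size, 1))
p, h = rnn_step(vocab.index('h'), h)
print(p.ravel())    # probability distribution over the next character
```

Sampling a character from `p` and feeding it back in as the next input is how text is generated one character at a time.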

These avatars will be deployed to two separate internet access points. Using Raspberry Pi devices, we will connect the robots (smart objects) to the Neural Conversation clone access points (robot brains) and enable philosophical discussions between the two robots using neural chatbots, speech recognition and speech synthesis technologies.
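
The resulting robot-to-robot loop might be sketched as follows. The two agent functions stand in for the cyber-clone access points and the speak() stub stands in for cloud speech synthesis on the Raspberry Pi; none of these are the installation's actual APIs.

```python
# Hypothetical sketch of the robot-to-robot conversation loop: each robot's
# reply becomes the other robot's next input. All functions are placeholders.

def aristotle_agent(utterance: str) -> str:
    # Placeholder reply from the Aristotle (Magnanimous) cyber clone.
    return f"Virtue lies in the mean; I answer '{utterance}' with moderation."

def nietzsche_agent(utterance: str) -> str:
    # Placeholder reply from the Nietzsche (Übermensch) cyber clone.
    return f"I answer '{utterance}' from beyond good and evil."

def speak(robot: str, text: str) -> None:
    # Placeholder for speech synthesis through the robot's speaker.
    print(f"[{robot}] {text}")

line = "What is the good life?"
for _ in range(3):                 # three exchanges of the open-ended dialogue
    line = aristotle_agent(line)
    speak("Magnanimous", line)
    line = nietzsche_agent(line)
    speak("Übermensch", line)
```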

By applying this concept, things converted into “smart objects” obtain a distinctive personality, intelligence, and decision-making ability. The key novelties in the installation Meeting Points: Übermensch and Magnanimous are:

    • Humanless creative process conducted by Artificial Conversational Agents.
    • Using Cyber Clones as creative and artistic medium.
    • Art of AISense or Machine-Context Art.
    • Robot-Robot Interactions as new interaction phenomena and Human Third-Party Neo Technological Experience.
    • Unsupervised and reinforcement learning of conversation agents.

 

Meeting Points: Übermensch and Magnanimous

Touching technology in the flash

posted in: Media | 0

By GOVTECH SINGAPORE – 15 March 2017

https://www.tech.gov.sg/TechNews/Innovation/2017/03/Touching-technology-in-the-flash

 


Dr Anders Ynnerman moves his hands over the images on the touchscreen, slicing, dicing and rotating, revealing layer upon layer of skin, muscle and bone.

The audience watches in awe.

Dr Anders Ynnerman, director of Sweden’s Norrköping Visualisation Centre, is a scientific visualisation expert.

They’re looking at a full-body scan of a traffic accident victim, and they can see every injury in larger-than-life detail, including the cause of death: a broken neck, caused by a blow to the head.

Dr Ynnerman, a scientific data visualisation expert and the director of the Norrköping Visualisation Centre in Sweden, was speaking at a session on virtual and augmented reality (AR/VR) at EmTech Asia.

The conference, organised by the MIT Technology Review to explore global emerging technologies, was held in Singapore from 14-15 February 2017.

These full-body scans or ‘virtual autopsies’, he said, are treasure troves of medical data; he takes great care to present them with the utmost respect for the persons who died under tragic circumstances.

Touched by Data

The datasets are generated through computerised tomography (CT) scans, which produce around 25,000 slices of data that together form a full virtual replica of the human patient.

Dr Ynnerman uses mathematics and computer graphics to combine these slices into a huge block of data that can be visualised in three dimensions, and manipulated using touch interfaces.
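
As a rough illustration (not the centre's actual software), combining slices into a 3D block that can be cut along any axis amounts to stacking 2D arrays into one volume:

```python
import numpy as np

# Illustrative sketch: stacking CT slices into one 3D block of data.
# Sizes here are toy values; a real full-body scan produces ~25,000 slices.
n_slices, height, width = 100, 64, 64
slices = [np.random.rand(height, width) for _ in range(n_slices)]

volume = np.stack(slices, axis=0)   # shape: (n_slices, height, width)

# "Slicing and dicing" the body is then just array indexing along any axis.
axial    = volume[50, :, :]   # one of the original scan slices
coronal  = volume[:, 32, :]   # a cut perpendicular to the slice planes
sagittal = volume[:, :, 32]   # a side-on cut
print(volume.shape, axial.shape, coronal.shape, sagittal.shape)
```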

Dr Ynnerman dissecting the appeal of touching datasets in the form of 3D graphics.

The technology is clearly a huge boon for medical schools and hospitals.

But he quickly realised its potential for communicating science to the general public. His team has gone on to scan all manner of museum artefacts, including Egyptian mummies, fossils and a wide range of animal specimens.

They have placed touchtable devices in museums around the world — including the Science Centre Singapore —  for visitors to interact and play with the data.

“We’ve gone from very basic mathematical principles all the way out into the museum gallery. I see children exploring scientific data. When they interact with it they’re getting interested in the content, but they’re also getting interested in the technology.”

Dr Ynnerman added: “That’s the best reward you can get as a professor. Seeing kids playing with your stuff is much more important than getting citations and papers!”

Kissed by innovation

Also speaking at the session was Dr Adrian David Cheok, director of the Imagineering Institute in Malaysia and Chair Professor of Pervasive Computing at the City University of London.

(Dr Cheok was formerly a professor at the National University of Singapore, where he founded the Mixed Reality Lab; it has since moved to London.)

Dr Adrian Cheok, Director of the Imagineering Institute in Malaysia.

Because non-verbal interactions make up a large part of how humans communicate, talking to someone over the internet still pales in comparison to meeting him or her in person, said Dr Cheok.

Thus, his goal is to develop tools that let us perceive the world through the internet using all of our five senses including touch, taste and smell.

“In the future, we’ll move from the age of information, where we are today, into the age of experience,” he said.

“You can share any experience through the internet: you can feel, taste and smell what it’s like to be anywhere in the world.”

His group has developed a range of devices aimed at letting you do just that.

For those who crave touch, there is the Huggy Pajama, a wearable device that lets parents and children exchange virtual hugs.

More recently, the group introduced the Kissenger, which does exactly what you think it does — the silicone lip-like device connects to your smartphone and lets you kiss someone over the internet.

Smells like Tech spirit

Besides touch, the senses of taste and smell are also powerfully evocative.

“Taste and smell are directly intermingled with the limbic system of the brain, which is responsible for emotion and memory,” said Dr Cheok.

“For example, smell can trigger a memory of your grandmother, or trigger an emotion — it can make you feel happy or sad if the same smell has done that in the past.”

Engineering taste and smell, however, is no trivial task.

An audience member getting up-close with one of Dr Cheok’s devices for a multi-sensory experience over the Internet.

Dr Cheok’s team has developed smartphone devices that let users send their friends smells over the internet; these involve the use of an atomiser and scent cartridges. But such devices have drawbacks — only one smell can be sent per cartridge, and cartridges have to be replaced once they run out of scent.

Thus, instead of resorting to chemicals, Dr Cheok is working on ways to directly stimulate the tastebuds or olfactory (smell) receptors.

By placing an electrode on your tongue, for example, he can deliver electrical signals that make you taste something sour; yet another device generates a sweet taste by stimulating the appropriate receptors on the tongue with heat.

Similarly, the team has also tried to electrically stimulate the olfactory receptors inside the nasal cavity to recreate smells.

Wearing this device, however, is still a little uncomfortable.

Although such devices may not be available on the mass market just yet, Dr Cheok believes that it is only a matter of time before they find their way into our homes.

“People really want to experience all of the five senses — they want to be able to have dinner with their grandmother even if she’s on the other side of the world.”

Prototype invented to recreate a long-distance kiss

posted in: Media | 0


By CUBADEBATE – 12 March 2017

Prototype invented to recreate a long-distance kiss

Kissenger. Photo taken from ERIZOS.


Computer expert Adrian David Cheok has created the Kissenger prototype, an invention that will allow long-distance kissing, according to the BBC.

According to Cheok, who heads the Imagineering Institute in Nusajaya, Johor, Malaysia, while the initial idea was a device to connect families, “the greatest interest comes from couples” living separately.

“The initial prototype was born in 2003 and after several tests, in 2015, they reached the current design, consisting of a phone casing that connects to the audio jack of the iPhone, iPod or iPad,” he explained.

The creator said that the application will only be available for devices running the iOS operating system, and although the device is in its prototype phase, it is expected to hit the market by the end of the year.

The name Kissenger comes from combining ‘kiss’ and ‘-ssenger’ (short for messenger).

For affective communication to take place, the two participants must have the device and download the application.

The phone is inserted into the holder, which contains a silicone area with high-precision force sensors capable of measuring the force exerted by the lips during a kiss.

Then, through the application, the device sends this data in real time to the recipient’s device.
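
The sense-and-send flow described above can be sketched as follows. The sensor read and the network send are stubs: the Kissenger's real firmware and wire protocol are not described in this article.

```python
import json
import time

# Hedged sketch of the Kissenger's sensing-and-transmit step: read the lip
# force sensors, package the readings, and relay them in real time.

def read_lip_forces():
    # Placeholder for the high-precision force sensors in the silicone pad;
    # one reading per sensor.
    return [0.12, 0.3, 0.25, 0.08]

def send_to_partner(packet):
    # Placeholder for the app relaying the data over the internet in real time.
    print("sending:", packet)

forces = read_lip_forces()
packet = json.dumps({"t": time.time(), "forces": forces})
send_to_partner(packet)
```

On the receiving end, the same data would drive the actuators that reproduce the sender's lip pressure.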

(With information from Prensa Latina)

Gadget will let you kiss your loved ones despite the distance

posted in: Media | 0


By UnoTv – 09/03/2017

http://www.unotv.com/noticias/portal/negocios/detalle/gadget-permitira-besar-tus-seres-queridos-pese-distancia-023298/

Kissenger is the name of the gadget that will let you kiss loved ones even when they are in different parts of the world. The name combines the word “kiss” and “-ssenger” (short for messenger).

The two people who want to “exchange” kisses must both have the device and download the application. The phone is inserted into the gadget, which has a silicone pad with high-precision force sensors that measure the force the lips apply during a kiss.

These sensors send the “data” to the phone’s app, which transmits it over the internet, in real time, to the other phone.

Adrian David Cheok, creator of the Kissenger, told the BBC that the device is in its prototype phase but is expected to reach the market by the end of 2017. The application will only be available for devices running the iOS operating system.

According to the publication, the price would be around 100 dollars.

The gadget has a silicone pad with high-precision force sensors. Photo: Imagineering Institute

It will be available only for the iOS operating system. Photo: Imagineering Institute

Would you dare to kiss like this? What the Kissenger is and how it works

posted in: Media | 0


By T13 – Wednesday, 8 March 2017

http://www.t13.cl/noticia/tendencias/tecnologia/te-animarias-besar-asi-es-kissenger-y-como-funciona

 

The Kissenger is a gadget that connects to a smartphone and simulates kisses that can be sent to any part of the world. It is expected to reach the market at the end of this year at a cost of US$100.

Credit: BBC Mundo

Can the Kissenger bring people closer across distance?

 

“Kisses by cell phone,” goes the song Spaghetti del Rock by the Argentine band Divididos.

And everything seems to indicate that technological developments want to take sending a kiss by phone to a more advanced level: that of feeling it.

The Kissenger is a gadget that would solve this problem for couples in long-distance relationships and family members living in different countries and, why not, connect fans with their idols anywhere in the world.

“When I was a child my grandparents lived a block from my house, but not everyone is so lucky, and that motivated me to think of a device to connect families,” said Adrian David Cheok, computing professor and creator of the Kissenger.

Even so, Cheok told BBC Mundo from Malaysia that “the greatest interest comes from couples” who live apart.

The name Kissenger comes from combining “kiss” and “ssenger” (short for messenger).

The Kissenger promises that kisses sent by phone can also be felt.

How does it work?

For the affective communication to take place, both participants must have the Kissenger and download the application.

The phone is inserted into the gadget, which contains a silicone area with high-precision force sensors.

These sensors can measure the force the lips exert during a kiss.

The device then sends this data to the phone’s app, which in turn transmits it over the internet, in real time, to the kiss recipient’s device.

The Kissenger also has miniature sensors that reproduce the data on the force exerted by the lips of the person sending the kiss.

In this way, the Kissenger creates a realistic kissing sensation.

The Kissenger has sensors that capture the force of kisses and send the data to the app to be transmitted.

Appearances

The Kissenger did not always have this size and shape.

“The initial device was born in 2003. It was a head with lips and, truth be told, it looked creepy,” Cheok told BBC Mundo.

After several tests, in 2015 they arrived at the Kissenger’s current design: a phone casing that connects to the audio jack of the iPhone, iPod or iPad, whose app will only be available for devices running the iOS operating system.

According to Cheok, who heads the Imagineering Institute in Nusajaya, Johor, Malaysia, the device is in its prototype phase, but he expects it to reach the market by the end of the year.

Its price? Around US$100, “or maybe less,” said its creator, who says he receives at least 30 requests a day from around the world for the device.

We will have to wait to try it, then. But would you dare to kiss like this?

Would you dare to kiss like this? What the Kissenger is and how it works

posted in: Media | 0


By BBC Mundo – 8 March 2017

http://www.bbc.com/mundo/noticias-39207456?post_id=627927107_10154351016672108

A woman kisses the device
Can the Kissenger bring people closer across distance?

 


Adrian David Cheok gives keynote speech at EmTech Asia 2017

posted in: Media | 0


By Nanotechnology Now – February 22nd, 2017

http://www.nanotech-now.com/news.cgi?story_id=54344

 

Space 4.0: A New Era for Space Exploration panel (L-R): Daniel Hastings, CEO and Director, Singapore MIT Alliance for Research and Technology (SMART) & Former Chief Scientist, US Air Force; Dava Newman, Apollo Program Professor Chair, MIT; David Oh, Project Systems Engineer and Former Lead Flight Director, Curiosity Mars Rover, NASA Jet Propulsion Lab; Matthew Bold, Principal Researcher, Lockheed Martin Space Systems Company Advanced Technology Center; Kay Soon Low, Professor/Director of Satellite Technology and Research (STAR) Centre, National University of Singapore; and Rohit Jha, Engineer and CEO, Transcelestial

“EmTech Asia is always a great event. We meet amazing men and women from around the world and we talk about technology that is going to change the future. There is work in bio-medical areas, in artificial intelligence, computer vision, virtual reality. It also gives many people a chance to get together and talk about new things they might be able to collaborate on, might be able to discover and, most importantly, how they can contribute to positive things for all of humanity. And we mean that sincerely, that’s why EmTech Asia is so important and that’s why Singapore is proud to host it.” said Steve Leonard (pictured above), Founding CEO of SGInnovate and Disruptive Innovation Partner of EmTech Asia.

One of the key themes was space exploration, featuring speakers from NASA and MIT such as Dava Newman, Apollo Program Professor Chair, MIT and Former Deputy Director of NASA; and David Oh, Project Systems Engineer and Former Lead Flight Director, Curiosity Mars Rover, NASA Jet Propulsion Lab. Both speakers were also engaged in a conversational panel hosted by the ArtScience Museum (ASM) in collaboration with EmTech Asia. The panel was held in conjunction with the NASA exhibition at the ASM, and was attended by over 130 students, teachers and media representatives.

MIT Hacking Medicine Robotics Singapore 2017 was held the weekend leading up to EmTech Asia 2017, and its winners took to the stage to discuss their hackathon experiences and the potential for robotics to provide long-term solutions in elderly care and the broader healthcare industry in Singapore. Held from 10 to 12 February at SGInnovate, the hackathon aimed to address unmet needs in elderly care and medicine and how robotics can play a role in aiding an ageing society. The winning team, Botler, created a patient-friendly autonomous transport for social robotics in eldercare.

This year’s conference featured a session on materials science with Jackie Ying, Executive Director, Institute of Bioengineering and Nanotechnology, A*STAR. Her presentation, Nanostructured Materials for Energy and Biomedical Applications, described the synthesis of metallic, metal oxide, semiconducting and organic nanoparticles and nanocomposites of controlled size, morphology and architecture while discussing their unique properties. The cybersecurity session was led by Walter O’Brien, CEO, Scorpion Computer Services and Executive Producer of hit TV series Scorpion, who spoke about how countries can better protect themselves against cyber security threats.

According to Ron Cellini, Analog Garage/Emerging Business Group at Analog Devices and Cybersecurity Partner of the event, “The main take away from EmTech Asia is not just the ideas presented but the enthusiasm behind them. It is great to see the speakers go up the stage and feel the passion for what they are doing. What’s different at EmTech Asia compared to other conferences is the quality. The quality of the presentations, the quality of the folks you meet. You are not going to come here just to hear presentations that you’ve heard before. You’re going to hear things that are new and that challenge you. The pace, the interactivity with some of the talks, the ability of questioning that continually. This conference really encourages you to participate. I definitely met the right people here. I’ve got a whole stack of things I need to do when I leave this conference and for me that’s the best metric for when I go to conferences.”

EmTech Asia 2017 also featured a session on a Brave New (Bio-Engineered) World, featuring Le Cong, Postdoctoral Fellow, Broad Institute of MIT and Harvard, who introduced advances in genome-editing tools using the CRISPR system and highlighted how genomics analysis could be integrated to transform our ability to understand and treat complex diseases such as cancer. Other sessions included The Story and The Prototype by Mike North, host of Prototype This! on the Discovery Channel. Mike shared his rapid prototyping philosophy of designing a story and a prototype, testing them as fast as possible, seeing where they work and fail, and then iterating to deliver well-branded, relevant products. A light-hearted demo was presented by Adrian David Cheok, Director, Imagineering Institute & Chair Professor of Pervasive Computing, City University of London, during his Everysense Everywhere Human Communication presentation, where he demonstrated the Kissenger and the thermal and electric taste applications with the help of conference delegates.

EmTech Asia also celebrated the 10 young innovators recognised on the 2017 regional ‘Innovators Under 35’ list by MIT Technology Review, who took to the stage to present elevator pitches highlighting their work and research. Their inventions and research were judged the most groundbreaking and exciting of more than 100 nominations from Southeast Asia, Taiwan, Australia and New Zealand.

For one of the Innovators Under 35, Dhesi Raja, Chief Scientist and Cofounder of Artificial Intelligence in Medical Epidemiology (AIME), the event turned into an opportunity to raise capital, “Emtech Asia (and Singapore) is definitely the next hub after Silicon Valley that you want to be part of, where great minds meet. Besides the mind blowing convergence of technology, engineering, medicine & entrepreneurship, a vast network of investors has also enabled us to verbally secure a deal worth S$ 200,000, just after a 3 minute pitch. Yes! This is the next valley! Singapore valley!”

Key sponsors and partners of EmTech Asia this year included Host Partner, Infocomm Media Development Authority of Singapore (IMDA); Diamond Sponsor, Accenture; Disruptive Innovation Partner, SGInnovate; Innovation Partner, Singapore-MIT Alliance for Research and Technology (SMART); Cybersecurity Partner, Analog Devices (ADI); Silver Sponsors L’Oréal Research & Innovation and SAP Innovation Center. Partners, MIT Professional Education, MIT Hacking Medicine, Solve and Workforce Singapore. Media Partners included Asia-Pacific Biotech News, Asian Scientist, Biotechin.Asia, Geeks in Cambodia, Research SEA, Startup Bangkok, The Tech Portal India and TechStorm TV.

EmTech Asia will return in January 2018. Visit www.emtechasia.com to learn more.

Digital Smell Interface

posted in: Media | 0

By Surina Hariri, Nur Ain Mustaffa, Muhd Khir Hafifi Muid, Sharon Kalu Joseph Ufere, Kasun Karunanayaka, Adrian David Cheok.

Digital stimulation of smell is considered a useful step in expanding technology for multisensory communication. Previous methods of activating the sensation of smell chemically have several disadvantages, such as low controllability, high cost, the need for refilling, and complexity. In this project, we are researching and developing a new interface that can induce weak electrical pulses on the smell receptors and generate smell sensations (the concept of this interface is shown in Figure 1). We believe that a weak electrical signal can excite the smell receptors and generate smell perceptions.

The sensitivity and effectiveness of electrical stimulation of human smell receptors will be tested using a current controller device. The device, shown in Figure 2, is equipped with adjustable parameters, frequency and current, to produce the required electrical pulses. The stimulation process includes placing a pair of customized silver electrodes inside the part of the nose where it touches the olfactory nerves.
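
As an illustration of such adjustable pulse parameters (example values only, not the actual device settings), a rectangular pulse train is fully described by its current amplitude, frequency, pulse width and duration:

```python
# Hypothetical parameterisation of a current controller with adjustable
# frequency and current; the function below just computes the on/off
# breakpoints of a rectangular pulse train.

def pulse_train(current_ma, frequency_hz, pulse_width_ms, duration_s):
    """Return (time_ms, current_ma) breakpoints of a rectangular pulse train."""
    period_ms = 1000.0 / frequency_hz
    points = []
    t = 0.0
    while t < duration_s * 1000.0:
        points.append((t, current_ma))            # pulse switches on
        points.append((t + pulse_width_ms, 0.0))  # pulse switches off
        t += period_ms
    return points

# 2 mA, 0.5 ms pulses (the stimulus strength reported in the earlier
# olfactory-mucosa studies discussed in this project) at 10 Hz for 1 s.
train = pulse_train(current_ma=2.0, frequency_hz=10.0,
                    pulse_width_ms=0.5, duration_s=1.0)
print(len(train), train[0], train[1])
```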

Generally, the smell-sensitive receptors are located near the olfactory bulb and the nasal conchae (the anatomy of the nasal conchae is shown in Figure 3). There are three regions inside the nostrils, called the superior, middle and inferior nasal conchae, which are nearest to the olfactory receptors and where a stimulating electrode could be placed. Therefore, we mainly stimulate receptor cells in this area in order to trigger smell-related perceptions in the human brain. The placement of electrodes will be done with the help of a medical expert, in such a way that the electrodes do not come off quickly. These electrodes will be controlled by our own specially designed circuit that can deliver a few milliamperes of current pulses to the smell-sensitive cells.

In most studies of the olfactory system that examine the electrical activity of the olfactory bulb, an adequate olfactory stimulus, such as blowing odorous air into the nose, has been used as the routine method of activating the olfactory bulb. Only a few attempts have been made at electrical stimulation of the olfactory system. In 1961, Yamamoto stimulated the human olfactory mucosa with electrical pulses to detect the bulbar potentials; electrical stimulation (2 mA, 0.5 ms) of the human olfactory mucosa evoked a change in potential recorded from the frontal sector of the head. Ishimaru et al. conducted an experiment in 1997 in which the properties of the olfactory bulb potential evoked by electrical stimulation of the olfactory mucosa were studied in rabbits immobilized with d-tubocurarine; the evoked potential was a slow negative wave when recorded from the surface of the bulb. This field has therefore remained an untouched area for the exploration of new possibilities until today. In 2002, Ishimaru et al. concluded that the electrical olfactory evoked potential (EOEP) is suitable for electrophysiology, investigating the relationship between the EOEP and Toyoda and Takagi’s perfumist’s strip method (T&T olfactometry), a standard Japanese means of psychophysical olfactometry. Electrical stimulation via bipolar electrodes (2 mA, 0.5 ms, 300 trials) was fed to the olfactory mucosa, and 4 channels of EOEP were amplified, filtered (2 to 250 Hz) and recorded. Electrical stimulation of the right or left olfactory mucosa evoked an electrical olfactory evoked potential; however, no sense of smell occurred. Tali et al. also concluded that indiscriminate electrical stimulation of the olfactory mucosa does not produce olfactory perception but does alter activity in deep brain structures.

We hope that in the future we will be able to develop a combined interface that can effectively regenerate smell sensations digitally. This digital regeneration of smell will be useful for several industries, such as gaming, virtual reality, entertainment, and online marketing, where people can create smell-related content, information, and food experiences that can be shared, learned, and experienced. In the medical field, this research will be useful for treating patients who suffer from conditions such as anosmia and parosmia.

 

Electric Smell Interface

Last stop: after sex with automatons, marrying a robot

posted in: Media | 0


By  – 11 February 2017 – PlayGround

http://www.playgroundmag.net/futuro/sexo-robots-matrimonio-legal-2050-realdolls_0_1918608121.html

 

“By 2050, sex with robots will be popular, human-robot couples common, and marriage to robots legal,” these experts claim. For now, sex doll manufacturers are starting to use Artificial Intelligence so that the dolls simulate having feelings.

Sexbots

Twenty years ago, in his home garage, Matt McMullen, then a 26-year-old artist, designed a female doll, photographed it, and posted the pictures on the Internet. Someone who saw them emailed him with a commission: he wanted one, but life-sized and one he could have sex with.

Since that first sexy and astonishingly realistic mannequin, which for him was little more than a joke, McMullen has found that thousands of people are willing to pay several thousand dollars for a sex doll. Abyss Creations, the company he built in an industrial park in San Marcos, California, has sold close to 8,000 units since its founding. Thousands of real dolls that are not rough pieces of plastic, but ‘girls’ with silicone skin so their touch is not cold, with freckles, makeup, breasts of different shapes and sizes never affected by gravity, and a built-in metal skeleton that lets them adopt and hold all kinds of postures. Very soon, moreover, customers will be able to have conversations with their dolls.

And no, we do not mean ‘talking to themselves’ while stroking the dolls’ hair.

“By 2050, sex with robots will be popular, human-robot couples common, and marriage between the two will be approved in different parts of the world” (David Levy)

McMullen’s new project, which he says will reach the market at the end of the year, promises to revolutionize the market for sex with non-humans. His company has worked with robotics engineers to develop an artificial intelligence that will be integrated into its dolls.

Matt wants to make it possible for his inert ‘girls’ to talk and remember, as well as to blink and move their heads and lips in time with the sentences they speak.

The innovation opens the door to a prediction that David Levy, a specialist in Artificial Intelligence, made recently at the second Love and Sex With Robots congress, held at Goldsmiths, University of London: that by 2050 sex with robots will be popular, human-robot couples common, and marriage to robots approved in different parts of the world.

Does it sound hard to believe? Well, experts such as Adrian Cheok, professor of computing at City, University of London and director of the Mixed Reality Lab, think the prediction is not at all far-fetched.

“It might seem like an outrageous claim, because we are talking 35 years ahead. But 35 years ago people thought homosexual marriage was intolerable,” Cheok told Quartz. “Until the 1970s, some US states did not allow marriage between white and black citizens. Society progresses and changes very quickly.”

Although Cheok acknowledges that sex robots are, for the most part, a projection of the most sexist male sexual fantasies, in his view marriages between humans and robots could have an overwhelmingly positive effect on society. “People assume that everyone can get married, have sex, fall in love. But in reality many cannot,” he notes. And even those who can, Cheok adds, might find value in different options. “Many human marriages are very unhappy. In the context of a bad marriage, a robot will always be better than a human.”

I. I feel attracted to a robot

“Artificial intelligence will one day give robots greater reasoning power than they have today. And we only need to consider what makes one human being notice another in order to replicate it. How are we attracted to another person? Beauty, intelligence, wealth, personal integrity… all of that will apply to robot-human relationships. A robot will simulate reciprocating human affection, and that will build a relationship. A credible appearance of reciprocity will be created,” Chamari Edirisinghe, a computer engineer at the Imagineering Institute, tells us.

The future of sexual and emotional relationships with robots is moving towards made-to-order programming. Do you want a shy sexbot or an extroverted one? Innocent or intellectual? Voluptuously sensual or cooler? Besides being able to choose a face, a body, or a type of nipple, users will be able to combine different characteristics to configure a personality that becomes unique, because the automaton itself learns on its own from every conversation it holds.

Did work go well today?

How are you today?

Are you happy?

Come closer, a little closer.

A robot will simulate reciprocating human affection, and that will build a relationship

“Intimacy comes more from the way we connect with someone. With human beings, even if a sexual encounter with another person is very brief, with no form of interaction other than the sexual one, it is still intimate. With sex robots we are trying to make masturbation, or individual sexual pleasure, more intimate. Providing an immersive experience will be fundamental for the user to feel that the sexbots are engaged,” explains Lynne Hall, a researcher in the Department of Computer Science at the University of Sunderland.

So if the robot remembers that you like yellow flowers and eating pizza, it will seem a little more flesh and blood.

If it turns to ask how you are, you will think it cares.

And if its fingers bring you to superlative orgasms and afterwards you laugh together, it will feel like complicity.

They have no feelings, because they are only a set of zeros and ones, but the development of their design consists in creating the illusion that they do.

II. Who will keep a sexbot in the closet

Lars and the Real Girl, Craig Gillespie’s film in which Ryan Gosling plays a withdrawn man who walks, cares for, and pampers one of the dolls McMullen made for the film, can give a mistaken picture of who buys the silicone dolls or, in the future, the sexbots. To begin with because, for now, most customers are not in their thirties but between 55 and 65 years of age.

“It is not easy to sketch a profile,” McMullen points out. “They are usually people who feel lonely in their lives or carry a lot of pain from past relationships, and the idea of the doll and the robots is safe for them, so they do not suffer another disappointment.” But settling for this image would be unfair, because it is too reductive.

Parents of autistic or disabled children buy these products so they can satisfy their needs. Psychiatrists keep them in their clinics to treat sexual harassers. Couples who want to improve their sex lives use them. Vince Neil, former singer of the hard rock band Mötley Crüe, got one.

The idea of the doll and the robots is safe for them, so they do not suffer another disappointment

Anyone, with no particular profile, but with the urge to spend between 5,000 and 10,000 dollars simply because they feel like it, can have a real doll in their bedroom. Or a male one. Because male dolls also exist, although women represent a minimal percentage of purchases: 10%.

Some attribute this imbalance to a different way of experiencing relationships. “I think that for men, sight and touch play a key role, while for women fantasy and imagination carry more weight, not just the physical,” says McMullen, who intends to develop male versions of his robots, but later on.

For other voices, however, the imbalance has two causes: a gender problem that prevents women from liberating themselves sexually, plus the fact that sex robots, like much of the technology we use today, have been designed “by men and for men”, as Kate Devlin, a feminist, AI expert, and advocate of sexbot use, points out.

“A machine is a blank slate. Society is rethinking the sex/gender dualism. Why should a sex robot be binary? Why not explore new ideas that are inclusive and socially transformative?” Devlin asks in one of her articles.

III. The moment a robot breaks your heart

With the imminent appearance of this class of automatons designed for sex and love, the first moral questions arise. Questions such as weighing whether these technologies can exist outside the laws that regulate sexual relations between humans. For example, should it be illegal to manufacture sex dolls that look like little girls? Or robots with anorexic bodies? Should commissions asking for exact copies of real people be rejected? Should robots be programmed to have a kind of moral ‘conscience’?

Oliver Bendel, a German philosopher and computer scientist, recalls that last month, at the Love and Sex with Robots congress, he was asked the following questions: Is it possible to be unfaithful to a human with a sex robot? Should a man or a woman feel jealous about their partner’s affair with an automaton?

His answer is that, as these robots become more sophisticated, as their speech and their still-limited movements improve, the possibility of jealousy and the feeling of infidelity are inevitable.

Bendel stresses that robots can make us and others suffer. “With their bodies, with the limbs that compose them, they can hurt us physically while being used, and their linguistic abilities can offend us with certain words or by telling the truth or lying,” he says. They can even cause us a painful feeling of frustration when we realize that they will never really love us. For him, sexbots will never be the perfect, risk-free companions that others depict.

It is worth recalling that in 1997, when real dolls were first going on sale, Matt McMullen appeared on Howard Stern’s television show. After McMullen promised to make him a doll and delivered it, Stern declared: “Some of the best sex I’ve had in my life. It feels like a real woman!”

That statement was possibly scripted, part of the show, but 20 years later, on the verge of a definitive step towards sex robots with artificial intelligence, the question arises of whether they will replace people. “I think in some cases yes, but there will not be many. The robots are not designed to replace people but to be an alternative and an experience,” McMullen clarifies. “Many people ask me whether, if we could have attractive sexbots, like Scarlett Johansson or Brad Pitt, we would stop preferring relationships with humans. Obviously we all have ideals, but we forget that there are even more important connections that do not arise from appearance.”

Sonic Metaphors

posted in: Media | 0

By Predrag K. Nikolic, Adrian David Cheok

 

 

Sonic Interaction Design (SID) is an interdisciplinary design approach in which sound plays the main role in developing users’ interactions with electronic devices or digital systems and in giving those interactions meaning. SID research falls within a diverse range of emerging disciplines and approaches that study the tactile, performative, and multisensory aspects of sonic experience.

 

Sonic Metaphors is a research project directed toward exploring how interactive user experience can be enhanced when human values and cognition direct the process of designing content and services. In particular, we are using sonic interaction design and the potential of sound interfaces to develop new interaction paradigms applicable to various contextual systems such as computer games, smart housing, education, and urban playful environments. It combines several art and design works based on sound interaction: the interactive installations Vrroom and Before & Beyond, In-Visible Island, and a recently started set of In-Visible Sculptures in which sound interactions are combined with magnetic friction as an artistic medium.

 

The sonic interaction experience is designed upon the relationship between objects, actions, and sounds, and as such this relationship should be considered an important element in studies of sonic interaction design. We propose using sonic metaphors to give new meaning to these relationships and to enrich the user experience, making it more synesthetic, memorable, and meaningful.

 

Sonic Metaphors

You Might Be Able To Marry A Robot By 2050

posted in: Media | 0


By Shannon Ullman – February 21, 2017 – Your Tango.

http://www.yourtango.com/2017300158/you-might-be-able-marry-sex-robot-by-2050


 

Robots are the future… in more than one way.

Single people: everything is going to be OK. If you’ve given up on ever finding someone and have vowed to be alone with your cats until the end of time, you may want to reconsider.

With technology taking over pretty much everything these days, it really shouldn’t be a surprise that it may take over our love lives, too. I mean, sex robots are kind of a thing and virtual porn is heading towards a market takeover. It won’t be long before the robots who fill our sexual desires start to satisfy our emotional desires, too.

At a recent “Love And Sex With Robots” conference (it’s a real thing… seriously) at the University of London, David Levy, an expert and author of a book on love between humans and robots, predicted that sex robot marriages would be legal by 2050. The director of the Mixed Reality Lab in Singapore, Adrian Cheok, said that the prediction seems pretty valid.

During his speech at the conference, Cheok was quoted as saying, “That might seem outrageous because it’s only 35 years away. But 35 years ago people thought homosexual marriage was outrageous… Until the 1970s, some states didn’t allow white and black people to marry each other. Society does progress and change very rapidly.”

I’ve got to say, the man makes a pretty good point. In fact, Cheok is full of logical wisdom and just listening to his point of view makes the whole robot-human marriage sound rather lovely.

He goes on to argue that marrying robots could make huge improvements to society as many human-to-human marriages are unhappy or end up in divorce. He also argues that while society assumes that everyone has the ability to meet someone, fall in love, get married and live happily ever after, this is simply not the case for a huge part of the population.

 



 

When comparing a bad marriage to a human or a marriage to a robot, Cheok believes that the robot relationship is a much better option.

While there are still massive yet doable improvements to be made to sex robots, the biggest challenge in making them lovable is the skill of conversation. By making robots look like they love you and, more importantly, making you FEEL like they love you, they will give you the same kind of connection you would get with human love.

Humans are capable of loving pets and fictional characters, which is why Cheok believes that falling in love with robots is not a big stretch. Cheok offers up some pretty convincing arguments, but there are two sides to every story and others are not so convinced that human and robot marriages will be successful.

Oliver Bendel, a professor at the University of Applied Sciences and Arts in Switzerland, argues that love and sex robots will have no moral standing. He believes that the contract of marriage involves duties that robots can’t actually perform, such as taking care of children. However, he still believes that this marriage option could become legal by 2050, simply because of public pressure.

With the possibility of the government taking action against this kind of marriage, the professor thinks that anything could happen and that we should prepare ourselves for any outcome.

Falling in love and marrying a robot — the pro and con lists for this topic are both long. What do you think? Would marrying a robot ever seem appealing to you?

 

Kissenger

posted in: Media | 0

By Emma Yann Zhang , Adrian David Cheok

 

 

This research looks into sharing intimacy and emotion in digital communication through mediated physical interactions, kissing in particular. It aims to extend our sense of touch by creating an interactive kissing machine that produces dynamic haptic force feedback and other sensory stimuli to transmit realistic kissing sensations over a telecommunication network. The research takes a novel approach to affective communication by constructing a mathematical model of kissing, including the forces, their dynamics, and the bilateral control of those forces, to enable remote kissing in a communication system.

 

A multimodal interactive system for remote kissing interaction was developed. The kissing device is designed as an attachment for mobile phones, allowing users to kiss each other remotely while having a video chat on their phones. It measures and transmits real-time lip pressure over the Internet. The device consists of force sensors that measure the contact force between the lip surface of the device and the user’s lips, as well as a linear actuator array that produces accurate force feedback and lip movements during interaction. A bilateral force controller is used so that both users feel the reflections of their own lip forces as well as the forces from each other. The system also engages the sense of smell by emitting body scents or pheromones from the device.
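The bilateral force controller described above gives each user a reflection of their own lip force plus the force transmitted by the partner. A minimal sketch of that mixing step, where the gains `k_reflect` and `k_remote` are illustrative assumptions, not values from the actual device:

```python
def bilateral_feedback(local_force, remote_force, k_reflect=0.3, k_remote=0.7):
    """Actuator target force: a fraction of the user's own sensed lip force
    reflected back, plus the force received from the partner over the network.
    The gains are hypothetical; a real controller would tune them for stability."""
    return k_reflect * local_force + k_remote * remote_force

if __name__ == "__main__":
    # Two sides of a kiss: each device senses a local lip force (in newtons)
    # and receives the partner's sensed force over the network.
    force_a, force_b = 2.0, 1.0
    out_a = bilateral_feedback(force_a, force_b)  # force rendered to user A
    out_b = bilateral_feedback(force_b, force_a)  # force rendered to user B
    print(out_a, out_b)
```

The symmetry of the two calls is the point of the bilateral scheme: each side runs the same controller, so the user pressing harder both feels their own press reflected and transmits a stronger force to the partner.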

 

The system provides a new communication channel for people to share intimacy and affection through remote physical interaction. It engages a wide spectrum of our sensory modalities, including touch and smell, thereby increasing the sense of telepresence and allowing for deep emotional exchanges. Since face-to-face interaction is not always available in our increasingly globalised society, the system also offers an effective and intuitive way for parents and grandparents to communicate with young children who have limited language abilities, as well as with people with physical disabilities or communication disorders. The outcome of this research could also have a great impact in the area of robotics and artificial intelligence. The digitisation of kissing provides exciting opportunities for human-robot or even robot-robot relationships. Robots will be able to possess emotional intelligence and learn to understand the emotional meaning and pleasure of kissing, thereby establishing intimate and humanistic relationships.

 

Kissenger

Moody Hoody

posted in: Media | 0

By Nurafiqah Johari, Halimahtuss Saadiah, Kasun Karunanayaka, Adrian David Cheok

 

 

Moody Hoody is a hoody that can emit fragrance from a device inside it, a combination of fashion and technology. We mounted a Scentee module inside the vest section of the hoody to emit scented vapor on demand when a button located in the pocket is pressed. When you press the button, you can see vapor emitting from the vest of the hoody through the Scentee module. Scentee is the world’s first phone attachment that produces scent on demand via a smartphone app.

The main objective of this project is to lift the mood of the person wearing the hoody, and of the people around them, as the scented vapor comes out. Previous studies show that certain fragrances can influence people’s mood and emotions. On the application side, this hoodie can be used for aromatherapy. Further, when you want to attract someone, you can wear the hoodie with a pleasant scent that makes people feel good when they are near you. The colours of the hoody and its inner lining were specifically chosen to create a mood for the wearer.

In the future, we plan to make Moody Hoody a whole lot slimmer by attaching an array of much flatter Scentees. All the wiring will be replaced with conductive threads. We also want the hoody to connect to a smartphone over Bluetooth, so the wearer can always trigger the release of scented vapor from the phone. We hope it will take the clothing industry to a totally different level and become a very profitable, marketable product in the not-so-distant future.
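The pocket-button interaction above can be sketched as a small controller with a cooldown, so that holding the button down does not exhaust the scent cartridge. This is only an illustrative sketch: the `emit` placeholder, the cooldown value, and the injectable clock are all assumptions, and a real implementation would drive the Scentee module instead.

```python
import time

class ScentController:
    """Debounced trigger for a scent module: one emission per press,
    with a cooldown between emissions."""

    def __init__(self, cooldown_s=2.0, clock=time.monotonic):
        self.cooldown_s = cooldown_s
        self.clock = clock  # injectable for testing
        self._last_emit = float("-inf")
        self.emissions = 0

    def emit(self):
        # Placeholder: in the real hoody this would pulse the Scentee module.
        self.emissions += 1

    def on_button(self, pressed):
        """Call on every poll of the pocket button; emits at most once per cooldown."""
        now = self.clock()
        if pressed and now - self._last_emit >= self.cooldown_s:
            self._last_emit = now
            self.emit()
```

A planned Bluetooth trigger from the phone could reuse the same `on_button` path, so the cooldown policy applies regardless of which input requested the emission.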

 

Moody Hoody

Teacherbot

posted in: Media | 0

 

 

Schools are meant to prepare learners for the future outside of school. Current developments in AI, machine learning, and robotics suggest a future of fully shared social spaces, including learning environments (LEs), with robotic personalities. Today’s learners (as well as teachers) should be prepared for such a future. AI in Education (AIED) has focused on the implementation of online and screen-based Pedagogical Agents (PAs); however, research findings support richer learning experiences with embodied PAs, and hence recent studies in AIED have focused on robots as peers, teaching assistants, or instructional materials. Their classroom uses employ gamification approaches and are mostly based on a one-robot-one-student interaction style, whereas current educational demands favour collaborative approaches to learning. Robots as instructors are novel and considered a major challenge, owing to the requirements of good teaching, including the demands for agency, affective capabilities, and classroom control, which machines are believed to be incapable of.

Current technological capabilities suggest a future with full-fledged robot teachers teaching actual classroom subjects. We therefore implement a robot teacher with capabilities for agency, social interaction, and classroom control within a collaborative learning scenario involving multiple human learners and the teaching of basic Chemistry, in line with the current focus on STEM areas. We consider the PI pedagogical approach an adequate technique for implementing robotic teaching, given its inherent support for instructional scaffolding, learner control, conceptual understanding, and learning by teaching. We are exploring these features, in addition to the agentic capabilities of the robot, and their effects on learner agency as well as on improved learning in terms of engagement, learner control, and social learning. In the future, we will focus on other key concepts in learning (e.g. assessment), other types of learners (e.g. learners with cognitive/physical disabilities), interaction styles, and LEs. We will also explore cross-community approaches that leverage the integration of sibling communities.

 

Teacherbot
