Please vote for the brilliant Dr. Maria Tomas, Senior lecturer in SMCSE, @CityUniLondon for the 2019 British Photography Awards in the category of Documentary!

posted in: Media | 0

 

Please vote for the brilliant Dr. Maria Tomas, Senior lecturer in SMCSE, @CityUniLondon for the 2019 British Photography Awards in the category of Documentary! Please vote here -> https://www.britishphotographyawards.org/2019-Shortlist/Documentary/Tradition/8a2d6263-85e7-4a60-846c-6afd9857b656

Why do we think tiny things are cute? | Popular Science

posted in: Media | 0

Why do we think tiny things are cute?

There are a few reasons, but we’re hard-wired to find small things adorable.

 

By Dana G. Smith, August 28, 2018

https://www.popsci.com/why-do-we-think-tiny-things-are-cute

 

cute monkey looking in the mirror

 

What would inspire someone to painstakingly craft an inch-and-a-half-long burrito using dental tools? A hamster, of course. In the viral YouTube video “Tiny Hamster Eating Tiny Burritos,” a man prepares a chicken and single black bean burrito, then serves it to the rodent waiting at a jam-jar table. The diner pulls the burrito off a poker-chip plate and stuffs the entire thing into its mouth, cheeks puffed as if in satisfaction. It’s amazing.

Videos like this are shared all over the internet, with miniature birthday celebrations, romantic dates, and tiki parties starring cherubic animals in unlikely situations. The clips have accumulated millions of views. So why do we find these tiny tableaus so satisfying? In part, it’s because we’re engineered to appreciate the smaller things in life.

The protagonist is typically a small animal with a big head and big eyes, features collectively known as “baby schema”—a phrase coined in a 1943 paper by Austrian ethologist Konrad Lorenz. Human infants are the prototypical embodiment of baby schema. Because our babies are so helpless, Lorenz proposed, we evolved to find these characteristics cute so we’ll instinctually want to take care of them. This response helps our species survive. In fact, the power of baby schema is so strong, we’re even attracted to other beings with these traits.

“We’re not robots or computers,” says Adrian David Cheok, director of the Imagineering Institute in Malaysia, who has studied Kawaii, a culture prevalent in Japan that celebrates the adorable side of life. “Not only do we find other people’s children cute, we also find other animals cute, like puppies or kittens, because they have similar features to human babies.”

Research bears this out. Dozens of studies show that the smaller and more stereotypically “baby” a human or animal looks, the more we want to protect it. One investigation found that seeing pictures of baby animals makes us smile, while another discovered that photos of human infants trigger the nucleus accumbens, a brain region implicated in the anticipation of a reward. There’s even evidence that cute things help us concentrate and perform tasks better, theoretically because they sharpen the focus of our attention on the recipients of our care.

Our response to baby schema is so strong that it also spills out toward inanimate objects. In a 2011 study, researchers tweaked images of cars to make them embody the baby schema, with huge headlights and smaller grilles to reflect infants’ big eyes and small noses. College students smiled more at pictures of the baby-faced autos, finding them more appealing than the unaltered versions.

Mimicking chubby-cheeked critters to make goods more attractive might help sell cars, but not all little creatures have features manufacturers should imitate. Some small animals don’t exactly inspire our cuddle reflex—who wants to caress a cockroach? That’s partly because these beasties display traits (bitty heads, large bodies, and beady eyes) that don’t fit the baby schema. Sure, some people have a soft spot for “ugly cute” animals, including some species of spiders, but these still fall on Lorenz’s spectrum with big, bright peepers.

What about the things we squee over that don’t have eyes at all? Think of that darling burrito. What it lacks in a face, it makes up for in sheer artistry. “When you’re looking at [things] and seeing them as cute because they’re small, you’re also seeing them as cute because they’re cleverly made,” says Joshua Paul Dale, a faculty member at Tokyo Gakugei University and co-editor of the book The Aesthetics and Affects of Cuteness.

It makes sense, then, that the original meaning of “cute” was “clever or shrewd.” Simply put, we appreciate the craftsmanship of small things—it’s more difficult to make a burrito the size of a thumb than one as big as your forearm. A man examining his finished creation for flaws with a dentist’s mirror definitely meets that criterion.

These tiny, carefully made items may also bring us joy because they make us want to play. Psychologists Gary Sherman and Jonathan Haidt theorize that cuteness triggers not just a protective impulse, but also a childlike response that encourages fun. To them, the desire to engage with cute things stems from our need to socialize children through play—an urge we transfer to adorable objects.

Craftsmanship and playfulness definitely factor into why we find pint-size things so charming, but don’t discount the huge impact of their petite proportions. Miniature scenes make us feel powerful as viewers. Anthropologist Claude Lévi-Strauss suggests in The Savage Mind that we derive satisfaction from minuscule objects because we can see and comprehend them in their entirety, which makes them less threatening. Essentially, tiny towns, toy soldiers, and miniature tea sets make us feel like gods…or Godzillas.

That power, of course, is all in your head. The reason you smile as you build a ship in a bottle or watch videos like “Tiny Birthday for a Tiny Hedgehog” (Look it up. You’re welcome.) is that your brain is taking in the sight of that carefully frosted cake and small spiky body topped with a party hat and sending you mental rewards, causing you to feel formidable, focused, happy, and capable of keeping the weak and vulnerable alive. Yes, it means we are easily dominated by diminutive things, but so what? They’re adorable.

Does the Internet smell?

posted in: Media | 0


 

BY OCTOBER 26, 2018

http://www.govtech.com/question-of-the-day/Question-of-the-Day-for-10262018.html

 

You can’t smell the food in that review you’re reading on Yelp, but one day you might be able to.

Researchers at the Imagineering Institute in Malaysia are working on creating “digital smell.” One day they want users to be able to smell what they’re seeing when they use their digital devices. Right now, though, the process involves putting a cable up the user’s nose in order to stimulate certain neurons in the nasal passage.

In order to get people to think they were smelling something, the research team needed to deliver electric currents to the olfactory epithelium cells about 7 centimeters above and behind the nostrils. Most of the volunteers reported fragrant or chemical smells, although some also reported fruity, sweet, toasted minty, and woody odors.

The next step is to find a less invasive way to administer the electricity, such as a much smaller cable or by skipping the nose entirely and stimulating the brain instead.

Welcome to the Internet of Smells

posted in: Media | 0
By Federico Martelli | Oct 22 2018, 9:19pm

Welcome to the Internet of Smells – We spoke with the researchers who can simulate smells with electrical stimuli.

One of the most frustrating things about browsing the internet is that the only senses involved are sight and hearing. It is often a real shame that touch, smell, and taste are excluded from the flow of digital information that reaches us. These senses are also connected to emotions and memories, and they help shape our experience of the world — not to mention that their inclusion in the digital experience would open up a vast range of applications, from the world of food to that of virtual sex, and, above all, it could allow lovers who cannot see each other in person to feel closer.

Kasun Karunanayaka of the Imagineering Institute in Malaysia is working on the digital reproduction of the sense of smell. Karunanayaka collaborates with one of the most prominent researchers in this field, Adrian Cheok, who directs the Imagineering Institute — previously interviewed by Motherboard about his work to create a “multisensory Internet” — and with the Japanese startup Scentee to build an app that adds smells to smartphone functions. Thanks to the technology he has developed, people with olfactory disorders could regain some olfactory function, or VR experiences could be integrated with brain stimulation technologies to give users a richer sensory experience.

One of his latest experiments aims to reproduce smells without going through chemical stimulation of the olfactory cells. His team has created a device that relies on electrical stimuli. By placing electrodes in contact with the olfactory epithelium cells that send information along the olfactory nerve to the brain, and by varying the amount and frequency of the electrical stimuli, his team managed to reproduce a range of olfactory sensations. The goal of the research is eventually to reproduce these stimuli by reaching the brain directly, without sticking a tube up people’s noses, given that many of the volunteers who took part in the tests could not stand it. I spoke about the work directly with Karunanayaka via email.


Motherboard: How does the electrical stimulation of nasal cells work?
Kasun Karunanayaka: The aim of our study was to electrically stimulate the human olfactory epithelium and describe the corresponding sensations. Electrical stimulation can cause depolarization in nerve cells, and so, with a sufficient depolarization amplitude, it can induce sensations or reactions. One can argue that electrical stimulation of the olfactory receptors can reproduce certain olfactory sensations, in the same way it can produce taste sensations (a practice known as electrogustometry).

What is depolarization, exactly? What biological mechanisms does your study build on?
The human nose is part of the chemosensory system, which helps us discriminate a vast range of odors and flavors. When odor molecules enter the olfactory epithelium, they bind to olfactory receptors. The receptors then trigger a cascade of signals inside the cells that results in the opening and closing of ion channels. This increases the concentration of positive ions inside the olfactory cells (an effect known as depolarization). The effect causes the olfactory cells to release packets of chemical signals called neurotransmitters, which give rise to a nerve impulse.
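
To make the threshold idea concrete: in a textbook leaky integrate-and-fire toy model (a standard abstraction, not the researchers’ model), an injected current steadily depolarizes the membrane, and the cell only fires once the depolarization crosses a threshold. A minimal sketch in Python; every constant below is an illustrative assumption:

```python
# Minimal leaky integrate-and-fire sketch of "sufficient depolarization
# amplitude induces a response". All constants are illustrative textbook
# values, not figures from the study.

def simulate_lif(i_stim_na: float, t_ms: float = 100.0, dt_ms: float = 0.1) -> int:
    """Integrate a leaky membrane driven by a constant stimulus current."""
    v_rest, v_thresh, v_reset = -70.0, -55.0, -70.0  # membrane potentials, mV
    tau_ms = 10.0        # membrane time constant
    r_mv_per_na = 0.1    # membrane resistance (mV of depolarization per nA)
    v = v_rest
    spikes = 0
    for _ in range(int(t_ms / dt_ms)):
        # dV/dt = (-(V - V_rest) + R * I) / tau
        v += (-(v - v_rest) + r_mv_per_na * i_stim_na) * dt_ms / tau_ms
        if v >= v_thresh:   # depolarization reached threshold: the cell fires
            spikes += 1
            v = v_reset
    return spikes

# A weak current depolarizes the membrane but never reaches threshold;
# a stronger one makes the cell fire repeatedly.
print(simulate_lif(i_stim_na=100.0))  # 0 spikes: sub-threshold
print(simulate_lif(i_stim_na=200.0))  # several spikes: supra-threshold
```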

 

What did you learn from your study?
In about a quarter of the 31 participants in our tests, the stimulus combinations of 1 mA at 70 Hz and 1 mA at 10 Hz produced olfactory sensations of fragrant, sweet, and chemical-like odors. Participants instead reported intense sensations of pain and tingling for the combinations of 1 mA at 180 Hz or 4 mA at 70 Hz. A small number of them reported experiencing visual flashes with stimulation at 4 mA and 70 Hz. We believe these results suggest that there may be an electrical pathway for reproducing the sense of smell in humans.

We plan to extend this experiment to a larger number of participants and to continue working with those who have already reported olfactory sensations. We want to expose them to different electrical stimulation parameters, varying the frequency, the current, and the stimulation period. In this way we expect to identify stimulation patterns that can effectively reproduce different olfactory sensations. The next step would be to compare the perception of electrical smells with the perception of natural smells, by studying which parts of the brain are activated by the corresponding stimulations. If both stimulation techniques activate approximately the same areas of the brain, we could argue that electrical stimulation can reproduce the same olfactory sensations that have a chemical basis.
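
To keep the reported figures straight, the combinations Karunanayaka describes above can be written down as a small lookup table. The (current, frequency) tuples below transcribe the numbers from his answer; the dictionary and the nearest-point helper are purely illustrative:

```python
# Stimulus combinations and reported sensations, transcribed from the
# interview above: (current in mA, frequency in Hz) -> dominant reports.
REPORTED_SENSATIONS = {
    (1, 10):  "fragrant / sweet / chemical smells (about a quarter of 31 subjects)",
    (1, 70):  "fragrant / sweet / chemical smells (about a quarter of 31 subjects)",
    (1, 180): "intense pain and tingling",
    (4, 70):  "intense pain and tingling; visual flashes in a few subjects",
}

def expected_report(current_ma: float, freq_hz: float) -> str:
    """Return the reported effect of the closest tested combination."""
    key = min(REPORTED_SENSATIONS,
              key=lambda k: (k[0] - current_ma) ** 2 + (k[1] - freq_hz) ** 2)
    return f"closest tested point {key}: {REPORTED_SENSATIONS[key]}"

print(expected_report(1, 70))  # -> fragrant / sweet / chemical smells ...
```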

 

How did the devices you developed previously work?
This is the first device we have built to reproduce olfactory sensations with electrical stimuli, and it is part of a long-term research project to reproduce taste and smell sensations through augmented reality. In 2011, we first presented a digital technology for reproducing flavors through electrical stimulation. More recently, we presented another flavor-reproduction technology, which uses thermal stimulation instead, at the IEEE VR 2018 conference. Earlier, in 2016, we proposed for the first time the idea of reproducing olfactory sensations using electrical stimulation. Subsequently, in 2017, we developed a laboratory olfactometer — a computerized, chemical-based odor delivery system. We have also collaborated with Scentee.

What are you working on with Scentee?
Scentee is the world’s first mobile-compatible device that reproduces smells. It plugs into the audio jack of iPhones or Android devices and can reproduce odors or fragrances using smartphone apps. The scent is released through an ultrasonic motor with a removable reservoir. The device can reproduce only one aroma at a time. The release of the aroma is triggered by an input on the touchscreen, an incoming text message, or a social network notification. It can also be used in various applications, such as a wake-up alarm or aromatherapy.

 

How do you see the future of this kind of technology? In what format will the data be transferred?
Today, augmented reality applications rely mainly on audio and video, but the digitization of touch and taste has already been achieved experimentally at the research level and will become standard in the future. With the digitization of smell, we will be able to digitally experience the five basic senses in augmented reality, and the user experience will become more complete. This will create more applications and opportunities in fields such as human-computer interaction, video games, medicine and e-commerce.

As for the data to be transferred in digital format, for continuous experiences we would have to transmit a stream of digital data, as we do for audio and video. However, the range of values and the structure of the data can only be defined once we have found the correct stimulation parameters to reproduce each olfactory sensation.
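
Since Karunanayaka stresses that the format is still undefined, the following is a purely speculative sketch of what a “smell stream” might look like if it carried per-frame stimulation parameters the way PCM audio carries samples. Every field name and value range here is invented for illustration:

```python
from dataclasses import dataclass
import struct

@dataclass
class SmellFrame:
    """Hypothetical frame of a digital-smell stream (no real format exists yet)."""
    timestamp_ms: int   # position in the stream, like an audio sample clock
    current_ma: float   # stimulation amplitude
    freq_hz: float      # stimulation frequency
    duration_ms: int    # how long to hold these parameters

    def pack(self) -> bytes:
        # Fixed-size binary encoding, analogous to a raw PCM sample.
        return struct.pack("<Iffi", self.timestamp_ms, self.current_ma,
                           self.freq_hz, self.duration_ms)

# A two-frame "clip": a fragrant pulse followed by half a second of silence.
clip = [SmellFrame(0, 1.0, 70.0, 500), SmellFrame(500, 0.0, 0.0, 500)]
payload = b"".join(frame.pack() for frame in clip)
print(len(payload), "bytes")  # 16 bytes per frame, 32 in total
```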

 

Interesting Engineering: This New Invention Lets You Smell Things Through Electricity – Perfecting this idea would enable users to send smells over the internet.

posted in: Media | 0
By October 19th, 2018

This New Invention Lets You Smell Things Through Electricity

Perfecting this idea would enable users to send smells over the internet.


The idea of having a real-time change in smell during immersive movie-watching experiences isn’t new. We can trace one such attempt back to 1959, when a technology called AromaRama was used to send smells across to the audience.

The benefit is increased engagement: people would get the smell of flowers when a scene revolves around a garden, or the scent of smoke during sequences such as wars or bomb explosions. Needless to say, the technology didn’t gain much traction.

The age where we can induce smell electrically!

In 2018, we are capable of a much more efficient method that could get us the same results. The researchers at the Imagineering Institute in Malaysia have found a new method that can induce the sensation of smell, and they plan to use it in AR- and VR-based applications.

Imagine being able to smell what you see in mixed reality experiences. The researchers are calling this Digital Smell. Currently, they have managed to do this by bringing thin electrodes into contact with the inner lining of the human nose.

Yes, the current version requires two wires to be inserted into your nostrils.

That said, the researchers are working on a smaller form factor for this technology so that it can be easily carried and used. The invention comes from Kasun Karunanayaka, who pursued it as a project toward his Ph.D. under Adrian Cheok, who is now serving as the director of the institute.

Cheok is also aiming for similar innovations, as his dream is to create a multisensory internet.

Much tinkering is needed to create a near-perfect form factor

The first version of the project involved chemical cartridges that mix and release chemicals to produce odors. But this was not what the team wanted moving forward. They wanted to create a system that produces scent through electricity alone.

The team also collaborates with a Japanese startup called Scentee to develop a smartphone gadget that can produce smells based on user inputs.

To create an all-electric system, the team experimented with exciting human neurons directly. The test requires a wire to be inserted into the subject’s nose. When the exposed silver tip touches the olfactory epithelium, located approximately 3 inches into the nasal cavity, the researchers send an electric current through it.

“We’ll see which areas in the brain are activated in each condition, and then compare the two patterns of activity,” Karunanayaka said. “Are they activating the same areas of the brain?” If so, that brain region could become the target for future research.

The researchers varied both the amperage and the frequency of the current to see which smell sensations they would create. For certain combinations, the perceived smells were fruity or chemical in nature.

The next step is to determine the exact parameters that create particular odors. The team also wants to redesign the device so that it is more comfortable for users.

In testing, many participants dropped out because they found the procedure very uncomfortable. Perfecting this tech opens up immense possibilities, helping people with smell disorders being one of them.

By including direct brain stimulators in VR headsets, content creators could even help users smell things based on what they see.

Motherboard: These Researchers Want to Send Smells Over the Internet

posted in: Media | 0
By Samantha Cole | Oct 19 2018, 11:51pm

These Researchers Want to Send Smells Over the Internet

With electrodes up the nose, they made people smell things that weren’t there.

In the future, we could huff food blogs and snort stinky Twitter feeds straight into our sinuses.

Okay, I’ll admit that’s a highly exaggerated interpretation of new research by Kasun Karunanayaka, a senior research fellow at the Imagineering Institute in Malaysia, and his team. They’ve designed a concept for smelling digital content—like restaurant menu items or a florist’s rose bouquet—using electrical stimulation directly up your nostrils.

 

We’ve seen high-tech prototypes in the world of multisensory technology before: From molecule mixes that evoke the smell of New York in virtual reality, to “programmable” scent cartridges released during a movie, to gas masks for smelling sex while watching porn in VR. But most of these involve a chemical mix to make the scent. Instead of physical scent-mixing, Karunanayaka’s smellable internet involves sticking electrodes up your nose, to touch and stimulate neurons deep inside your nasal passages.

By varying the amount and frequency of the electrical currents, the researchers were able to evoke smells that weren’t there—but what test subjects actually perceived varied quite a bit, from person to person. Some described the smells as fruity, sweet, toasted minty, or woody, Karunanayaka told IEEE Spectrum. Others found the experiment so uncomfortable that they quit the trial after one session.

Shoving electrodes deep into nasal passages is obviously not the most user-friendly way to transmit digital smells, but the research team hopes to make the electrodes smaller and more flexible, or stimulate the brain directly, no invasive nose-cords required.

 

TECH CRUNCH – Researchers create virtual smells by electrocuting your nose by John Biggs @johnbiggs

posted in: Media | 0

BY John Biggs 

Researchers create virtual smells by electrocuting your nose

The IEEE has showcased one of the coolest research projects I’ve seen this month: virtual smells. By stimulating your olfactory nerve with a system that looks like one of those old-fashioned kids’ electronics kits, they’ve been able to simulate smells.

The project is pretty gross. To simulate a smell, the researchers are sticking leads far up into the nose and connecting them directly to the nerves. Kasun Karunanayaka, a senior research fellow at the Imagineering Institute in Malaysia, wanted to create a “multisensory Internet” with his Ph.D. supervisor, Adrian Cheok. Cheok is Internet famous for sending electronic hugs to chickens and creating the first digital kisses.

The researchers brought in dozens of subjects and stuck long tubes up their noses in an effort to stimulate the olfactory bulb. By changing the intensity and frequency of the signals, they got some interesting results.

 

The subjects most often perceived odors they described as fragrant or chemical. Some people also reported smells that they described as fruity, sweet, toasted minty, or woody.

The biggest question, however, is whether he can find a way to produce these ghostly aromas without sticking a tube up people’s noses. The experiments were very uncomfortable for most of the volunteers, Karunanayaka admits: “A lot of people wanted to participate, but after one trial they left, because they couldn’t bear it.”

 

While I doubt we’ll all be wearing smell-o-vision tubes up our noses any time soon, this idea is fascinating. It could, for example, help people with paralyzed senses smell again, a proposition that definitely doesn’t stink.

 

These Researchers Want to Send Smells Over the Internet – Electrical stimulation of cells in the nasal passages produces sweet fragrances and chemical odors

posted in: Media | 0

By Eliza Strickland, 17 Oct 2018

 

These Researchers Want to Send Smells Over the Internet – Electrical stimulation of cells in the nasal passages produces sweet fragrances and chemical odors
A volunteer tries out a "digital smell" apparatus
Electrical stimulation of neurons high up in the nasal passages can cause people to perceive aromas that aren’t really there.

 

Imagine a virtual reality movie about the Civil War where you can smell the smoke from the soldiers’ rifles. Or an online dating site where the profiles are scented with perfume or cologne. Or an augmented reality app that lets you point your phone at a restaurant menu and sample the aroma of each dish.

The researchers who are working on “digital smell” are still a very long way from such applications—in part because their technology’s form factor leaves something to be desired. Right now, catching a whiff of the future means sticking a cable up your nose, so electrodes can make contact with neurons deep in the nasal passages. But they’ve got some ideas for improvements.

This digital smell research is led by Kasun Karunanayaka, a senior research fellow at the Imagineering Institute in Malaysia. He started the project as a Ph.D. student with Adrian Cheok, now director of the institute and a professor at the City University of London, who’s on a quest to create a “multisensory Internet.” In one of Cheok’s earliest projects he sent hugs to chickens, and his students have also worked with digital kisses and electric taste.

 

Karunanayaka says most prior experiments with digital smell have involved chemical cartridges in devices that attach to computers or phones; sending a command to the device triggers the release of substances, which mix together to produce an odor.

Working in that chemical realm, Karunanayaka’s team is collaborating with a Japanese startup called Scentee that he says is developing “the world’s first smartphone gadget that can produce smell sensations.” They’re working together on a Scentee app that integrates with other apps to add smells to various smartphone functions. For example, the app could link to your morning alarm to get the day started with the smell of coffee, or could add fragrances to texts so that messages from different friends come with distinct aromas.
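
As a rough illustration of the app behavior described here (and only that; the article does not document Scentee’s actual API), an event-to-scent dispatcher could be as simple as the sketch below, where ScenteeDevice and its release method are invented stand-ins:

```python
# Hypothetical sketch of the described app: different phone events trigger
# different scents. "ScenteeDevice" is an invented stand-in, not a real SDK.

EVENT_SCENTS = {
    "morning_alarm": "coffee",
    "text_from_alice": "lavender",  # per-friend fragrances, as the article suggests
    "text_from_bob": "citrus",
}

class ScenteeDevice:
    """Stand-in for the phone-jack scent gadget (one aroma at a time)."""
    def release(self, scent: str) -> None:
        print(f"releasing {scent} from cartridge")

def on_event(device: ScenteeDevice, event: str) -> None:
    scent = EVENT_SCENTS.get(event)
    if scent is not None:
        device.release(scent)

on_event(ScenteeDevice(), "morning_alarm")  # -> releasing coffee from cartridge
```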

But Karunanayaka’s team wanted to find an alternative to chemical devices with cartridges that require refilling. They wanted to send smells with electricity alone.

For his experiments, he convinced 31 volunteers to let him stick a thin and flexible cable up their noses. The cable was tipped with a tiny camera and silver electrodes. The camera helped researchers navigate the nasal passages, enabling them to bring the electrodes into contact with olfactory epithelium cells that lie about 7 centimeters above and behind the nostrils. These cells send information up the olfactory nerve to the brain.

Typically, these olfactory cells are stimulated by chemical compounds that bind to cell receptors. Instead, Karunanayaka’s team zapped them with an electric current.

 

The digital smell apparatus includes a controller and a cable with a camera and electrodes on the tip

 

The researchers had previously combed the scientific literature [PDF] for examples of electrical stimulation of nasal cells, and found some reports that the stimulation caused test subjects to perceive odors. So they decided to experiment with different parameters of stimulation, altering both the amount and frequency of the current, until they found the settings that most reliably produced smell sensations.
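
That search is easy to picture as a parameter sweep. Here is a minimal sketch with an assumed grid of currents and frequencies and toy per-subject labels; the study’s actual protocol is not specified beyond the figures quoted in these articles:

```python
# Sweep current and frequency, tally how often each setting yields a smell
# report, and keep the most reliable settings. Grid values and data are toys.
from itertools import product
from collections import Counter

CURRENTS_MA = [0.5, 1, 2, 4]
FREQS_HZ = [10, 70, 180]

def best_settings(reports: dict) -> list:
    """reports maps (mA, Hz) -> list of per-subject labels, e.g. "smell"."""
    smell_counts = Counter()
    for params in product(CURRENTS_MA, FREQS_HZ):
        labels = reports.get(params, [])
        smell_counts[params] = sum(1 for label in labels if label == "smell")
    return smell_counts.most_common(3)  # the settings that most reliably worked

# Toy data in the shape the sweep would produce:
toy = {(1, 70): ["smell", "smell", "pain"], (4, 70): ["pain", "flash"]}
print(best_settings(toy))  # (1, 70) comes out on top
```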

The subjects most often perceived odors they described as fragrant or chemical. Some people also reported smells that they described as fruity, sweet, toasted minty, or woody.

This experiment was a very basic proof-of-concept, Karunanayaka says. The next step is to determine whether certain stimulation parameters are reliably linked to certain smells. He must also investigate how much variability there is between subjects. “There may be differences due to age, gender, and human anatomy,” he says.

The biggest question, however, is whether he can find a way to produce these ghostly aromas without sticking a tube up people’s noses. The experiments were very uncomfortable for most of the volunteers, Karunanayaka admits: “A lot of people wanted to participate, but after one trial they left, because they couldn’t bear it.”

 

The digital smell experiment setup

 

Two possible solutions suggest themselves, Karunanayaka says: They could make the insert smaller, more flexible, and less unbearable. Or they could skip past the nose’s olfactory cells and directly stimulate the brain.

As a step toward that neurotech goal, the Imagineering Institute researchers are planning a brain-scanning collaboration with Thomas Hummel, a leading expert in smell disorders at the Technische Universität Dresden in Germany. In the planned experiment, volunteers will both smell real odiferous objects, such as a rose, and also receive nasal stimulation. All these sniffs will take place while the volunteers are getting their brains scanned by a noninvasive method such as fMRI.

“We’ll see which areas in the brain are activated in each condition, and then compare the two patterns of activity,” Karunanayaka says. “Are they activating the same areas of the brain?” If so, that brain region could become the target for future research. Maybe the researchers could use a headset that provides a noninvasive form of stimulation to trigger that brain region, thus producing smell sensations without the need for either a rose or a nose-cable.
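
One simple way to picture that comparison: treat each condition’s scan as a vector of voxel activations and measure how similar the two patterns are. A minimal sketch with synthetic data (real fMRI analysis involves far more than a single correlation):

```python
# Compare a "real rose" activation map with an "electrical stimulation" map
# by correlating their voxel activations. Data here is synthetic.
import numpy as np

def pattern_similarity(activation_a, activation_b) -> float:
    """Pearson correlation between two flattened activation maps."""
    a = np.asarray(activation_a, dtype=float).ravel()
    b = np.asarray(activation_b, dtype=float).ravel()
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(0)
shared = rng.normal(size=1000)            # activity common to both conditions
rose = shared + 0.3 * rng.normal(size=1000)
electric = shared + 0.3 * rng.normal(size=1000)
print(f"similarity: {pattern_similarity(rose, electric):.2f}")  # close to 1
```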

Such tech could serve a restorative purpose: People with smell disorders could theoretically wear some headgear to regain some smell functions. And for people with intact sniffer systems, it could provide enhancements: For example, VR headset makers could build in the brain-stimulating tech to provide users with a more immersive and richer sensory experience.

ACE is extremely excited to announce our new steering committee

posted in: Media | 0

ACE is extremely excited to announce our new steering committee

 

The new ACE steering committee will bring profound and amazing experiences to the ACE conference. We look forward to a new fantastic future for ACE, making it the best entertainment computing conference in the world.

 

 

ACE New steering committee

 

Adrian David Cheok (Steering Committee Chair, Founder of ACE conference)

Adrian David Cheok, who was born and raised in Adelaide, Australia, graduated from the University of Adelaide with a Bachelor of Engineering (Electrical and Electronic) with First Class Honors in 1992 and an Engineering PhD in 1998.  He is Director of the Imagineering Institute, Malaysia, and Chair Professor of Pervasive Computing at City, University of London.

He is Founder and Director of the Mixed Reality Lab, Singapore. He was formerly Full Professor at Keio University, Graduate School of Media Design and Associate Professor in the National University of Singapore. He has previously worked in real-time systems, soft computing, and embedded computing in Mitsubishi Electric Research Labs, Japan.

He has been a keynote and invited speaker at numerous international conferences and events. He was invited to exhibit for two years in the Ars Electronica Museum of the Future, launching in the Ars Electronica Festival 2003 and 2017. His works “Human Pacman”, “Magic Land”, and “Metazoa Ludens”, were each selected as one of the world’s top inventions by Wired and invited to be exhibited in Wired NextFest 2005 and 2007.

He was awarded the Hitachi Fellowship, the A-STAR Young Scientist of the Year Award, and the SCS Singapore Young Professional of the Year Award. He was invited to be the Singapore representative of the United Nations body IFIP SG 16 on Entertainment Computing and the founding Chairman of the Singapore Computer Society Special Interest Group on Entertainment Computing. He was awarded an Associate of the Arts award by the Singapore Minister for Information, Communications and the Arts. He is a Fellow in Education, World Technology Network. He was awarded a Microsoft Research Award for Gaming and Graphics.

He received the C4C Children Competition Prize for best interaction media for children, the Integrated Art Competition Prize by the Singapore Land Transport Authority, the Creativity in Action Award, and a First Prize Nokia Mindtrek Award. He received a First Prize in the Milan International InventiON competition. He received an SIP Distinguished Fellow Award, which honors legendary leaders whose illustrious lives have positively influenced lives across generations and communities around the globe. He was awarded Young Global Leader by the World Economic Forum, an honor bestowed each year to recognize the top young leaders from around the world for their professional accomplishments, commitment to society, and potential to contribute to shaping the future of the world. He was awarded “Honorary Expert” by Telefonica and El Bulli, the number one restaurant in the world.

He is a Fellow of the Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA), an organisation committed to finding innovative practical solutions to today’s social challenges. His research on smell interfaces was selected by NESTA as one of the Top 10 Technologies of 2015. In 2016, he received a Distinguished Alumni Award from the University of Adelaide, in recognition of his achievements and contributions in the fields of computing, engineering, and multisensory communication. In 2017, he entered the elite list of The h-Index for Computer Science, a list that contains only the top 0.06% of all computer scientists in the world. In 2018, he was awarded the Albert Nelson Marquis Lifetime Achievement Award. His remote kissing gadget, Kissenger, was selected for the Top 100 Science Spinoffs.

He is Editor in Chief of the academic journals: Advances in Robotics and Automation, Transactions on Edutainment (Springer), ACM Computers in Entertainment, and Lovotics: Academic Studies of Love and Friendship with Robots, and Multimodal Technologies and Interaction. He is Associate Editor of Advances in Human Computer Interaction, International Journal of Arts and Technology (IJART), Journal of Recent Patents on Computer Science, The Open Electrical and Electronic Engineering Journal, International Journal of Entertainment Technology and Management (IJEntTM), Virtual Reality (Springer-Verlag), International Journal of Virtual Reality, and The Journal of Virtual Reality and Broadcasting.

 

 

Yair Goldfinger

Yair Goldfinger co-founded AppCard, Inc. in 2011 and serves as its Chief Executive Officer. He co-founded Dotomi, Inc. in 2003, served as its Chief Technology Officer, and was a Director of Jajah Inc. He serves as an Advisor of Talenthouse, Inc. and an Advisor to the Board of Volicon, Inc. He has deep technology expertise in creating personal, relevant, and timely one-to-one messaging channels. He was also a co-founder of Odysii Inc. (also known as Odysii Ltd.) and of Mirabilis/ICQ, the world’s first Internet-wide instant messaging service, which was acquired by AOL in 1998; he served as Vice President of R&D and Chief Technology Officer of Mirabilis/ICQ. He serves as Chairman of the Board at Strategy Runner (US) Limited and as Chairman of Medipower Overseas Public Company Limited, where he has been a Non-Executive Director since 2008. He is a Director of PicScout Inc. and PicApp Technologies Ltd., and a Director at FiTracks, The Consumer Media Group, Inc., and Silent Communication Ltd. He was granted the Wharton Infosys Business Transformation Award in 2005. Mr. Goldfinger holds a BA in Math and Computer Science from Tel Aviv University.

 

 

Jaap van den Herik

Jaap van den Herik studied mathematics (with honours) at the Vrije Universiteit Amsterdam and received his PhD degree at Delft University of Technology in 1983. In 1984 he was visiting professor at the McGill School of Computer Science in Montreal. Thereafter, he was subsequently affiliated with Maastricht University (1987-2008) and Tilburg University (2008-2016) as full professor in Computer Science. He is the founding director of IKAT (Institute of Knowledge and Agent Technology) and TiCC (Tilburg center for Cognition and Communication) and was supervisor of 71 PhD researchers.

At Leiden University, Van den Herik was affiliated with the department of Computer Science (now LIACS) between 1984 and 1988. He became professor of Computer Science and Law in 1988, at the Center for Law in the Information Society (eLaw). Since 2012, he has also been a fellow professor at the Centre for Regional Knowledge Development (CRK), for the supervision of PhD students. Furthermore, he has been part of the Leiden Institute of Advanced Computer Science (LIACS) since 2014, where he co-founded the Leiden Centre of Data Science (LCDS).

Van den Herik’s research interests include artificial intelligence, intelligent legal systems, big data and social innovation. In 2012, he received an ERC Advanced Grant together with Jos Vermaseren (PI, Nikhef) and Aske Plaat, for the research proposal “Solving High Energy Physics Equations using Monte Carlo Gaming Techniques.” Van den Herik received a Humies Award in 2014, for his work on chess programming.

Van den Herik is active in many organizations and advisory boards, such as the Belgian Netherlands Association of AI, JURIX, the International Computer Games Association, ToKeN, Catch and the consortium BiG Grid. Furthermore, he is a fellow of the European Coordinating Committee for AI (ECCAI), member of TWINS (the research council for sciences of the KNAW) and member of the Royal Holland Society of Sciences and Humanities.

 

 

Hiroshi Ishiguro

Hiroshi Ishiguro received a D.Eng. in systems engineering from Osaka University in 1991. He is currently Professor of the Department of Systems Innovation in the Graduate School of Engineering Science at Osaka University (2009-) and Distinguished Professor of Osaka University (2017-). He is also visiting Director (2014-) and was group leader (2002-2013) of Hiroshi Ishiguro Laboratories at the Advanced Telecommunications Research Institute, and an ATR fellow. His research interests include sensor networks, interactive robotics, and android science.

Professor Ishiguro is director of the Intelligent Robotics Laboratory, part of the Department of Systems Innovation in the Graduate School of Engineering Science at Osaka University. A notable development of the laboratory is the Actroid, a humanoid robot with lifelike appearance and visible behaviour such as facial movements.

In robot development Ishiguro concentrates on the idea of making a robot that is as similar as possible to a live human being. At the unveiling in July 2005 of the female android named Repliee Q1Expo, he was quoted as saying: “I have developed many robots before, but I soon realised the importance of its appearance. A human-like appearance gives a robot a strong feeling of presence. … Repliee Q1Expo can interact with people. It can respond to people touching it. It’s very satisfying, although we obviously have a long way to go yet.” In his opinion, it may be possible to build an android that is indistinguishable from a human, at least during a brief encounter.

Ishiguro has created an android that resembles him, called the Geminoid. The Geminoid was among the robots featured by James May in his 5 October 2008 BBC2 documentary on robots, Man-Machine, in the TV series Big Ideas. He also introduced a telecommunication robot called the Telenoid R1. Ishiguro also uses his android to teach his classes at Osaka University, and he likes to scare his students by making the Geminoid perform human-like movements such as blinking, “breathing” and fidgeting with its hands. Ishiguro was listed as one of the 15 Asian Scientists To Watch by Asian Scientist Magazine on 15 May 2011. In the 2018 documentary on artificial intelligence, Do You Trust This Computer?, Ishiguro is interviewed and is seen interacting with one of his robots.

 

David Levy

David Levy studied Pure Mathematics, Statistics, and Physics at St. Andrews University, Scotland, from where he graduated with a B.Sc. degree. He taught practical classes in computer programming at the Computer Science Department of Glasgow University, before moving into the world of business and professional chess playing and writing. (He wrote more than thirty books on chess.) He was selected to play for Scotland in six World Student Team Chess Championships (1965-1970) and in six Chess Olympiads (1968-1978). In 1968 and 1975 he won the Scottish Chess Championship. He was awarded the International Master title by FIDE, the World Chess Federation, in 1969, and the International Arbiter title in 1976.

The development of David’s interest in Artificial Intelligence started with computer chess, which was a logical combination of his addiction to chess and his work in the field of computing. In 1968 he made a bet with four Artificial Intelligence professors, including John McCarthy, who in 1955 had coined the phrase “Artificial Intelligence”, that he would not lose a chess match against a computer program within ten years. He won that bet, and another one for a further five years, succumbing only twenty-one years after making the first bet, and then to a forerunner of the program that defeated Garry Kasparov in 1997. David was first elected President of the International Computer Chess Association (ICCA) in 1986, and after a gap from 1992 to 1999 was elected once again, a position he has held since then (the association now being named the International Computer Games Association (ICGA)).

Since 1977 David has led the development of more than 100 chess playing and other microprocessor-based programs for consumer electronic products. He still works in this field, leading a small team of developers based mainly in the UK.

David’s interest in Artificial Intelligence expanded beyond computer games into other areas of AI, including human-computer conversation. In 1994 he brought together a team to investigate pragmatic solutions to the problem, resulting in his winning the Loebner Prize competition in New York in 1997. He won the prize again in 2009.

David’s achievements in the associated field of Social Robotics include founding international conferences on the subject, and being a co-organizer of six such conferences between 2007 and 2017. He has published a primer on A.I., Robots Unlimited. His fiftieth book, Love and Sex with Robots, was published in November 2007, shortly after he was awarded a PhD by the University of Maastricht for his thesis entitled Intimate Relationships with Artificial Partners.

David has had a lifelong interest in organising mind sports events, and was one of the organisers of the World Chess Championship matches in London (1986 and 1993), as well as the World Checkers Championship match between the human champion and a computer program (1992 in London and 1994 in Boston), in addition to dozens of computer chess championships and similar events. In 1989 he inaugurated the Computer Olympiad, for competitions between computer programs playing thinking games, which has since become an annual event. David also created the Mind Sports Olympiad, in which human players compete at more than 30 different strategy games and other “mind sports”.

His hobbies include classical music, and he has recently started playing chess again after a long gap away from active play.  He lives in London with his wife and their cat.

 

 

Cristina Portalés

Dr Cristina Portalés (PhD in Surveying and Geoinformation, with a specialization in Augmented Reality, 2008; IEEE Computer Society member) was recently (2012-2015) a Juan de la Cierva post-doc fellow at the Institute of Robotics and Information and Communication Technology (IRTIC) at Universitat de València (Spain), where she currently works as a full PhD senior researcher. She graduated with a double degree: Engineer in Geodesy and Cartography from the Universidad Politécnica de Valencia (Spain) and MSc in Surveying and Geoinformation from the Technische Universität Wien (Austria), with a specialization in photogrammetry/computer vision. She obtained her first diploma degree (Bachelor) with honours for the best academic record, and was awarded the San Isidoro prize. She was an ERASMUS, PROMOE and Leonardo da Vinci research fellow at the Institute of Photogrammetry and Remote Sensing (Vienna, 1999-2002), and a PhD research fellow at the Mixed Reality Laboratory of the University of Nottingham (UK, 2005) and at the Interaction and Entertainment Research Centre of the Nanyang University of Singapore (2006). She received an Outstanding PhD Award from the UPV, and was the first woman to receive the EH Thompson Award (best paper), given by the Remote Sensing and Photogrammetry Society (2010).

During 2008-2010 she worked at the Photogrammetry and Laser Scanning Research Group (GIFLE) of the UPV, and during 2011-2012 at the Technological Institute of Optics, Colour and Imaging (AIDO), where she was primarily involved in computer-vision related projects and in the project FP7-SYDDARTA, coordinating the technical work of the WP dedicated to software implementation and carrying out managerial tasks. Since 2014 she has been the proposal coordinator for her research group (ARTEC). She is the author of more than 60 scientific publications, including international conferences, high-impact journals, books and book chapters. She has been an invited speaker at Univ. Granada, Aula Natura, UNITEC (Honduras), RUVID and Univ. Gjøvik (Norway). She serves on the scientific and technical program committees of various international conferences (e.g. ACM SIGCHI ACE, GECCO), notably IEEE ISMAR (CORE A*), where she helps decide on the selected papers. She is also a reviewer for scientific journals with impact factors (e.g. MDPI Sensors, Springer Journal of Digital Imaging, Elsevier Computers in Industry). Cristina co-organized the successful ACM Advances in Computer Entertainment Technology Conference 2005, and has been an expert evaluator of FP7 and H2020 proposals. She is Deputy Editor-in-Chief of the scientific journal Multimodal Technologies and Interaction (MTI), and Editor-in-Chief of the International Journal of Virtual and Augmented Reality.

 

 

Yinglan Tan

Yinglan Tan is a Singaporean businessman and writer. He was a venture partner with Sequoia Capital until June 2017. He is the author of The Way Of The VC – Top Venture Capitalists On Your Board and Chinnovation – How Chinese Innovators are Changing the World, both published by John Wiley & Sons, and New Venture Creation – Entrepreneurship for the 21st Century – An Asian Perspective, published by McGraw-Hill.

Yinglan Tan founded Insignia Venture Partners in 2017, an early stage technology venture fund focusing on Southeast Asia.  Prior to founding Insignia Venture Partners, Yinglan was Sequoia Capital’s first hire and Venture Partner in Southeast Asia. He sourced multiple investment opportunities for Sequoia including Tokopedia, Go-jek, Traveloka, Carousell, Appier, Dailyhotel, Pinkoi and 99.co. Prior to joining Sequoia Capital, Yinglan was a member of the elite Singapore Administrative Service, where he served in a variety of positions in the National Research Foundation, Prime Minister’s Office (where he was part of a team that managed a S$360 million fund for innovation and enterprise), Ministry of Trade and Industry (where he was the deskhead for Economic Development Board Investments portfolio), and Ministry of Defence (where he was the recipient of the National Innovation and Quality Circle Award). He had previously been the founding Director of 3i Venturelab (China) at INSEAD, a joint-venture between private equity firm 3i (LSE:III) and INSEAD. Yinglan was also the Special Assistant to the Chief Economic Advisor of the World Bank, as a Milton and Cynthia Friedman Fellow.

Tan has been named a World Economic Forum (WEF) Young Global Leader (2012-2017), served as a Selection Committee Member for the WEF Technology Pioneers (2015-2017), and was a WEF Global Agenda Council member on Fostering Entrepreneurship (2011-2014). He is an Honorary Adjunct Associate Professor at the National University of Singapore and serves on the Strategic Research Innovation Fund Investment Committee at Nanyang Technological University.

 

 

Yorick Wilks (Senior Research Scientist)

 

Professor Yorick Wilks is a Senior Research Scientist at the Institute for Human and Machine Cognition (IHMC), Professor of Artificial Intelligence at the University of Sheffield, and a Senior Research Fellow at the Oxford Internet Institute at Balliol College.

He studied math and philosophy at Cambridge, was a researcher at the Stanford AI Laboratory, and was Professor of Computer Science and Linguistics at the University of Essex. Following this, he moved back to the United States for ten years to run a successful AI laboratory in New Mexico—the Computing Research Laboratory (CRL)—established by the state of New Mexico as a center of excellence in AI in 1985. His own research group at CRL was rated among the top five in the US in its area by the lab’s International Advisory Board, and it became totally self-supporting with grants by 1990.

In 1993 he took up a chair of AI at the University of Sheffield, and became founding Director of the Institute of Language, Speech and Hearing (ILASH). Since then he has raised over $50 million in grants from UK research councils and the European Community, and the Sheffield Natural Language Processing Research Group constitutes a major UK group in the area.

Professor Wilks has led numerous UK, US and EC initiatives, including the UK-government-funded Interdisciplinary Research Centre AKT (2000-2006) on active knowledge structures on the web (www.aktors.org). He has published numerous articles and nine AI books, including “Electric Words: dictionaries, computers and meanings” (1996, with Brian Slator and Louise Guthrie) and “Machine Translation: its scope and limits” (2008). His most recent book is “Close Encounters with Artificial Companions” (2010).

He is a Fellow of the American and European Associations for Artificial Intelligence and on the boards of some fifteen AI-related journals. He designed CONVERSE, a dialogue system that won the Loebner prize in New York in 1997, and he was the founding Coordinator of the EU 6th Framework COMPANIONS project on conversational assistants. A Companion keeps track of detailed knowledge of its owner as well as the wider world; its current major implementation is as an elicitor and organizer of personal knowledge and digital records, but the general concept is being adapted to learning, health and travel environments. His Companion-based work continues at IHMC as a part of the Tampa VA Smart Home initiative.

In 2008 Dr. Wilks was awarded the Zampolli Prize at the International Conference on Language Resources and Evaluation (LREC-08) in Marrakech, and the Association for Computational Linguistics (ACL) Lifetime Achievement Award at ACL-08 in Columbus, Ohio. In 2009 he was awarded the Lovelace Medal by the British Computer Society, and elected a Fellow of the ACM.

 

 

Mark Winands

Mark Winands is an Associate Professor at the Department of Data Science & Knowledge Engineering, Maastricht University. He received his Ph.D. degree in Artificial Intelligence from the Department of Computer Science, Maastricht University, in 2004. His research interests include heuristic search, machine learning and games. He regularly serves on the program committees of major AI and computer games conferences. He is editor-in-chief of the ICGA Journal and an associate editor of IEEE Transactions on Computational Intelligence and AI in Games. He is the author of various game-playing programs, most notably his Lines of Action program MIA, which has competed successfully at ICGA tournaments. Mark has conducted research on Proof-Number Search and introduced Enhanced Forward Pruning, which applies forward-pruning techniques such as Null Move Pruning and Multi-Cut.

 

 

William Yeager

William “Bill” Yeager is an American engineer. He is best known for being the inventor of a packet-switched, “Ships in the Night,” multiple-protocol router in 1981, during his 20-year tenure at Stanford’s Knowledge Systems Laboratory and the Stanford University Computer Science department. The code was licensed by upstart Cisco Systems in 1987 and comprised the core of the first Cisco IOS.

He is also known for his role in the creation of the IMAP mail protocol. In 1984 he conceived of a client/server protocol, designed its functionality, and applied for and received the grant money for its implementation. In 1985 Mark Crispin was hired to work with Bill on what became the IMAP protocol. Along with Mark, who implemented the protocol’s details and wrote the first client, MMD, Bill wrote the first Unix IMAP server. Bill later implemented MacMM, the first Macintosh IMAP client; Frank Gilmurray assisted with the initial part of this implementation.

At Stanford in 1979 Bill wrote the ttyftp serial line file transfer program, which was developed into the Macintosh version of the Kermit protocol at Columbia University. He was initially hired in August 1975 as a member of Dr. Elliott Levanthal’s Instrumentation Research Laboratory, where he was responsible for a small computer laboratory for biomedical applications of mass spectrometry. This laboratory, in conjunction with several chemists and the medical school’s department of inherited rare diseases, made significant inroads in identifying inherited rare diseases from the gas chromatograph/mass spectrometer data generated from blood and urine samples of sick children. His most significant accomplishment there was to complete a prototype program initiated by Dr. R. Geoff Dromey called CLEANUP. This program “extracted representative spectra from GC/MS data,” and was later used by the EPA to detect water pollutants.

From 1970 to 1975 he worked at NASA Ames Research Center where he wrote, as a part of the Pioneer 10/11 mission control operating system, both the telemetry monitoring and real time display of the images of Jupiter. After his tenure at Stanford he worked for 10 years at Sun Microsystems.

At Sun as the CTO of Project JXTA he filed 40 US Patents, and along with Rita Yu Chen, designed and implemented the JXTA security solutions. As Chief Scientist at Peerouette, Inc., he filed 2 US and 2 European Union Patents. He has so far been granted 20 US Patents 4 of which are on the SIMS High Performance Email Servers which he invented and with a small team of engineers implemented, and 16 on P2P and distributed computing. In the Summer of 1999 under the guidance of Greg Papadopoulos, Sun’s CTO, and reporting directly to Carl Cargill, Sun’s director of corporate standards, he led Sun’s WAP Forum team with the major objective, “… to work with the WAP Forum on the convergence of the WAP protocol suite with IETF, W3C and Java standards.”

Bill received his bachelor’s degree in mathematics from the University of California, Berkeley in 1964; his master’s degree in mathematics from San Jose State University, San Jose, California, in 1966; and completed his doctoral course work at the University of Washington in Seattle, Washington in 1970. He then decided, to the skepticism of his thesis advisor, to abandon mathematics for a career in software engineering and research, because he thought the future was in computing.
