Macron, the maker of the Virtual Mouse, Foldable HMD, and VR Gesture Player, was founded in 2003 as an inspection-equipment maker built on image-processing technology. A ten-engineer R&D team later developed the company's gesture recognition technology.
Macron’s Virtual Mouse is a gesture recognition input device that allows menu selection on a display by recognizing and tracking bare-hand movements. The standard approach relies on a 3D depth camera, whose high price and size restrict where it can be used. Macron’s method instead recognizes hand gestures with a single 2D camera, making it possible to implement gesture recognition functions at low cost. The Virtual Mouse can be used in game apps, web surfing, and any device that requires touchless control.
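The core of such a system is mapping a detected hand position in the camera frame to a cursor position on the screen. The sketch below is a minimal, hypothetical illustration of that mapping stage; the segmentation step that produces the binary hand mask is assumed to have already run, and none of these function names come from Macron.

```python
# Minimal sketch of the cursor-mapping stage of a 2D-camera virtual mouse.
# The hand-detection step (e.g. skin segmentation and contour tracking) is
# assumed to have already produced a binary mask; names are illustrative.

def hand_centroid(mask):
    """Return the (row, col) centroid of the 'on' pixels in a binary mask."""
    points = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    if not points:
        return None
    n = len(points)
    return (sum(r for r, _ in points) / n, sum(c for _, c in points) / n)

def to_screen(centroid, cam_size, screen_size):
    """Scale a camera-space centroid to screen coordinates."""
    r, c = centroid
    cam_h, cam_w = cam_size
    scr_h, scr_w = screen_size
    return (round(r / cam_h * scr_h), round(c / cam_w * scr_w))

# A toy 4x4 "camera frame" with a hand blob in the lower-right quadrant.
frame = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
centroid = hand_centroid(frame)                     # (2.5, 2.5)
cursor = to_screen(centroid, (4, 4), (1080, 1920))  # (675, 1200)
```

Tracking the centroid across frames then moves the cursor, and a separate gesture classifier would fire click events.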
SoBro is a table that knows about your devices and wirelessly connects to them. It charges your phone, can connect to the TV, plays music through wireless Bluetooth speakers, and even chills your water bottle.
SoBro is a 43 x 23 inch coffee table with a tempered-glass top. It features a compressor-driven refrigerated drawer that holds 24 bottles and chills drinks and food to the user’s desired temperature; Bluetooth speakers that play music from a nearby source; two USB charging ports alongside dual integrated 110 V power outlets for conveniently powering devices such as a laptop, smartphone, or percolator (for tea or coffee); and LED lights on the underside.
A summary of SoBro coffee table specifications:
- A 24 bottle refrigerated drawer
- 2 smaller drawers
- Built-in Bluetooth speakers
- Bluetooth connectivity to TV
- 2 USB charging ports
- 2 power outlets
- LED lighting on the underside
An Australian laundry detergent brand called OMO unveiled “Peggy,” a smart clothespin that promises to “revolutionize the way you do your laundry,” ensuring that you never again leave your clean clothes on the line when it starts to rain.
Peggy boasts fancy features like light, temperature, and humidity sensors, as well as Wi-Fi, a lithium-ion battery, and a USB charging port. It connects to your smartphone and tells you when your clothes are finished drying.
Peggy also checks the weather to let you know if it’s okay to dry your clothes outside. It will tell you the best time to wash your clothes and how long it will take to dry them outside (based on current weather), and will alert you if conditions change.
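As a rough illustration of the kind of decision logic such a device might run, here is a hypothetical sketch; the thresholds and the drying-time formula are invented for the example and are not OMO's.

```python
# Hypothetical sketch of Peggy-style logic: estimate drying time from
# weather readings and advise whether line-drying is safe. All numbers
# here are illustrative assumptions, not the product's actual model.

def drying_hours(temp_c, humidity_pct, base_hours=2.0):
    """Rough drying-time estimate: warmer, drier air dries clothes faster."""
    temp_factor = max(0.5, 1.5 - temp_c / 40)   # warmer -> smaller factor
    humidity_factor = 1 + humidity_pct / 100    # damper -> slower drying
    return round(base_hours * temp_factor * humidity_factor, 1)

def ok_to_dry_outside(rain_chance_pct, humidity_pct):
    """Advise line-drying only when rain is unlikely and air isn't saturated."""
    return rain_chance_pct < 30 and humidity_pct < 85

hours = drying_hours(temp_c=25, humidity_pct=50)      # ~2.6 hours
safe = ok_to_dry_outside(rain_chance_pct=10, humidity_pct=50)
```

A real device would refresh these inputs from its sensors and a forecast API, pushing a phone notification whenever `safe` flips.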
Scientists have created a machine learning system that lets robots be trained under the supervision of a human assistant, an approach set to have military uses.
Researchers at the US Army Research Laboratory and the University of Texas at Austin considered a specific case where a human provides real-time feedback in the form of critique. The team created an algorithm that mimics how the brain learns from videos and other practical lessons: if you want to teach a robot how to jump or hold a gun properly, you simply have it watch a video of the task. Technically speaking, the work extends TAMER (Training an Agent Manually via Evaluative Reinforcement), so the new algorithm is called Deep TAMER.
To put the idea to the test, the researchers demonstrated their invention by using just 15 minutes of human-supplied feedback to teach an agent to outperform humans on an Atari game. The Deep-TAMER-trained agents delivered superhuman performance, besting both an expert human Atari player and the amateur trainers themselves.
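The core TAMER idea can be sketched in a few lines: rather than optimizing an environment reward, the agent learns to predict the human trainer's feedback and acts greedily on that prediction. Deep TAMER replaces the lookup table below with a deep neural network; this toy example, with its simplistic stand-in "trainer" function, is an illustration of the principle, not the researchers' implementation.

```python
import random

# Toy tabular sketch of the TAMER idea: the agent learns a model H of the
# *human trainer's* feedback and acts greedily on it. Deep TAMER swaps
# this table for a deep network; everything here is illustrative.

random.seed(0)

STATES = range(3)
ACTIONS = ["left", "right"]
H = {(s, a): 0.0 for s in STATES for a in ACTIONS}  # predicted human feedback
ALPHA = 0.5                                         # learning rate

def trainer_feedback(state, action):
    """Stand-in for a human critic: approves of 'right' in every state."""
    return 1.0 if action == "right" else -1.0

def choose_action(state, eps=0.1):
    """Mostly exploit the learned feedback model, occasionally explore."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: H[(state, a)])

# Training loop: act, receive human critique, nudge the feedback model.
for step in range(200):
    s = random.choice(list(STATES))
    a = choose_action(s)
    h = trainer_feedback(s, a)
    H[(s, a)] += ALPHA * (h - H[(s, a)])

# After a short session, the greedy policy matches the trainer's preference.
policy = {s: max(ACTIONS, key=lambda a: H[(s, a)]) for s in STATES}
```

Because the signal comes from a human rather than a sparse game score, a few minutes of critique can shape behavior that ordinary reinforcement learning would need far more experience to find.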
The future army will be composed of soldiers and autonomous teammates, one team working side by side. Training will be conducted through videos, and both the artificially intelligent agents and the humans will be given the chance to prove what they’ve learned, with the aim of applying it in new environments they have never been in before.
Apple’s ARKit augmented reality platform just got better thanks to ARKit 1.5, which has rolled out to developers in beta.
The upgrade adds a big new feature to AR developers’ toolset: wall detection. Previous versions of ARKit only supported horizontal plane detection, meaning they could only detect floors for objects to be placed onto. With the new upgrade, AR developers will be able to add walls into the mix — for instance, creating a game in which you throw darts at a wall-mounted board.
The upgrade also allows better recognition of irregularly shaped objects like circular tables or chairs. In addition, it improves tracking speed and accuracy, along with the ability to parse 2D images within a 3D scene (say, a poster on a wall) and map them in physical space.
On top of this, resolution is improved so that ARKit will now look at the “real world” in 1080p, rather than 720p. That means that there should now be less disparity between the “fake” parts of an ARKit scene (which, oddly, looked more realistic due to their higher resolution) and the “real” parts, which previously appeared low resolution.
Intel’s smart glasses, called Vaunt, look like normal glasses. And unlike Google Glass, Vaunt doesn’t go overboard projecting information on a miniature display; rather, it shows minimal, contextual content.
For instance, it shows directions to your destination if you’re walking down an unfamiliar street, or recipe steps if you’re cooking a new dish. From the outside, Intel’s new Vaunt glasses look just like eyeglasses. When you’re wearing them, you see a stream of information on what looks like a screen — but it’s actually being projected onto your retina. On the right stem of the glasses sits a suite of electronics designed to power a very low-powered laser. That laser shines a red, monochrome image somewhere in the neighborhood of 400 x 150 pixels onto a holographic reflector on the glasses’ right lens.
The image is then reflected into the back of your eyeball, directly onto the retina. The left stem also houses electronics, so the glasses are equally weighted on both sides. Connecting over Bluetooth, the glasses can show you simple messages like directions or notifications. Because the image shines directly onto the retina, it is always in focus. That also means the display works equally well on prescription glasses as on non-prescription lenses.
Hasbro’s first augmented reality toy, themed after Marvel superheroes, lets you suit up like Tony Stark with a real-life mask and your phone, helping kids to live out their Iron Man fantasies.
The Marvel Avengers: Infinity War Hero Vision Iron Man AR Experience lets kids or role-playing adults become Iron Man by first putting an iPhone or Android phone inside a pair of goggles that go inside a mask. The Iron Man likeness comes from the toy’s included mask and gauntlet (the hand piece).
Kids are basically wearing a virtual reality (VR) headset on their faces with a phone strapped in. But instead of an immersive VR world, the app takes advantage of the phone’s camera and creates an AR overlay of enemies and bases around players. The goal is to take down all enemies and, ultimately, Thanos. Kids target enemies by raising their hand (and thereby the gauntlet) and pointing it at whatever they want to destroy. Now every child can be a superhero.
Yamaha has built an AI that turns dance moves into sounds, letting a famous dancer use his entire body to play the piano. The company recruited Kaiji Moriyama as the dancer and had the Berlin Philharmonic Orchestra Scharoun Ensemble provide musical support.
The mix of choreography and AI was developed by Yamaha Corporation to explore new, hybrid forms of artistic expression. Here’s how it worked: four types of sensors were attached to Moriyama’s body, capturing his movements, and the AI associated each motion with sound data. The resulting melody was sent to a Yamaha Disklavier piano, which translated the data into piano keystrokes, turning the dancer into an instrument.
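A toy version of that motion-to-sound mapping might quantize a normalized sensor reading onto a musical scale and emit a MIDI note number, the kind of message a player piano like the Disklavier can act on. The scale and mapping below are assumptions for illustration, not Yamaha's actual system.

```python
# Illustrative motion-to-note mapping: a normalized sensor intensity is
# quantized onto a C-major scale and returned as a MIDI note number.
# The scale choice and two-octave range are assumptions for the example.

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets of a C-major scale

def motion_to_midi(intensity, base_note=60):
    """Map a normalized motion intensity (0.0-1.0) to a MIDI note.

    0.0 maps to middle C (MIDI 60); larger movements select higher
    scale degrees across two octaves.
    """
    intensity = min(max(intensity, 0.0), 1.0)
    degree = int(intensity * (len(C_MAJOR) * 2 - 1))   # 0..13
    octave, step = divmod(degree, len(C_MAJOR))
    return base_note + 12 * octave + C_MAJOR[step]

# Still, moderate, and large movements produce rising notes.
notes = [motion_to_midi(i) for i in (0.0, 0.5, 1.0)]   # [60, 71, 83]
```

In a live system, each of the body-mounted sensors would feed a stream of such notes, layered into the melody the Disklavier plays.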
Motoichi Tamura, General Manager of the Research & Development Division at Yamaha Corporation, summed it up: “At Yamaha, we believe that AI will become a bridge between human beings and musical instruments. We are continuing our AI development activities to enable human beings to have freer and more direct expression through musical instruments.”