An international team of scientists has created a non-invasive device that stimulates the brain to improve cognitive function. In tests on macaques, it reportedly increased the monkeys’ learning speed by 40 percent.
Some worry artificial intelligence will steal human jobs — but one startup is betting that its AI will actually help you get a job.
San Francisco-based Mya Systems has developed an AI recruiter that can evaluate resumes, schedule and conduct applicant screenings, and even congratulate you on your first day of work.
Greg Whitby: I had the pleasure of speaking at the 2017 Edutech conference in Sydney recently. The conference is a ‘finger on the pulse’ of what is happening in schooling and the trends that are shaping the educational landscape.
Interesting, isn’t it, that we live at a time when we have moved on from talking about trends and data to ‘mega-trends’ and ‘big data’. Connectivity, scalability and mobility have been massive game-changers in that we no longer see business dictating trends. Instead we have technology delivering greater power to clients, customers and learners.
The AImotive office is in a small converted house at the end of a quiet residential street in sunny Mountain View, within spitting distance of Google’s headquarters. Outside is a branded Toyota Prius covered in cameras, one of three autonomous cars the Hungarian company is testing in the sleepy neighborhood. It’s a popular testing ground: one of Google’s driverless cars, now operating under spin-out company Waymo, zips past the office each lunchtime.
‘Benevolent bots’ or software robots designed to improve articles on Wikipedia sometimes have online ‘fights’ over content that can continue for years, say scientists who warn that artificial intelligence systems may behave more like humans than expected.
Editing bots on Wikipedia undo vandalism, enforce bans, check spelling, create links and import content automatically, whereas other, non-editing bots can mine data or identify copyright infringements.
When it comes to robot-human relations, the conversation typically centers on the welfare of the sentient. Science fiction paints us as petrified by our own creations; fears of a bot planet have influenced everything from Asimov’s “Laws of Robotics” to HAL 9000’s homicidal impulses to Skynet’s global genocide.
These human-centric anxieties are understandable. However, as our assorted bots and bits gain skills and personalities, should they be afforded some form of protection from us? It’s a question people are starting to seriously ponder.
When “little green men” invaded Crimea in early 2014, they left a data trail that went largely unnoticed by the U.S. Intelligence Community (IC). Distracted by a large Russian exercise to the west, the IC did not connect the digital dots that indicated the impending invasion. In the Information Age, the “dots” are more plentiful and glaring as everyone now leaves a data trail. Given that, how can intelligence analysts better gather, share, organize, and view data to reveal intent, more accurately predict behavior, and make better decisions with limited resources?