Technology

Artificial intelligence finds the perfect babysitter for your baby

A new artificial intelligence system called Predictim helps parents searching for a trustworthy babysitter to find a candidate who meets their standards, sparing them the trouble of vetting applicants themselves and the risk of being misled by their own intuition.

The “smart” software relies primarily on the thousands of posts a candidate has accumulated on social media (Facebook, Instagram, Twitter, and so on) over a long period, analyzing them to build a “profile” and flag candidates with a suspect past involving drug abuse, bullying, or run-ins with the police, but also those with a poor mental state, a lack of politeness, or a generally “negative” attitude.

The Predictim online service psychometrically assesses each prospective babysitter’s personality and then automatically produces a “score” for the prospective parents, advising them, for example, that a given babysitter carries a very low risk of secretly using drugs while the child is asleep but a moderate risk of bullying or disrespectful behavior.
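
Predictim has not disclosed how such scores are computed. Purely as an illustrative sketch, a report of this kind could be assembled by mapping per-category classifier probabilities onto coarse risk labels; every name, category, and threshold below is hypothetical, not the company’s actual method:

```python
# Hypothetical sketch of the per-category "score card" described above.
# Predictim's real model is proprietary; categories, labels, and cut-offs
# here are assumptions for illustration only.
from dataclasses import dataclass

RISK_LABELS = ["very low", "low", "moderate", "high"]

@dataclass
class CategoryScore:
    category: str       # e.g. "drug abuse", "bullying/harassment"
    probability: float  # classifier output in [0, 1]

def to_label(p: float) -> str:
    """Map a raw probability onto a coarse risk label (hypothetical cut-offs)."""
    thresholds = [0.1, 0.3, 0.6]
    for label, cutoff in zip(RISK_LABELS, thresholds):
        if p < cutoff:
            return label
    return RISK_LABELS[-1]

def build_report(scores: list[CategoryScore]) -> dict[str, str]:
    """Condense per-category classifier outputs into the kind of
    report the article says parents receive."""
    return {s.category: to_label(s.probability) for s in scores}

# Example: very low risk of drug abuse, moderate risk of bullying.
report = build_report([
    CategoryScore("drug abuse", 0.05),
    CategoryScore("bullying/harassment", 0.45),
])
print(report)  # {'drug abuse': 'very low', 'bullying/harassment': 'moderate'}
```

Translating raw probabilities into coarse labels like these is one plausible way to present classifier output to non-technical customers; the real product’s categories and cut-offs are not public.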

As is often the case with artificial intelligence, the system does not explain how or why it reaches its decisions, so parents are asked to trust it over recommendations about a babysitter from people they know, or over their own intuition from a personal interview with her.

Predictim belongs to a broader and increasingly widespread category of invasive “smart” technology that helps companies and organizations screen job candidates by analyzing, beyond the CV, the candidate’s voice, facial expressions, and online history, in order to bring to light any dark sides of their personality and private life.

Supporters of these systems point to the greater objectivity and transparency they ensure, but critics counter that they make automated decisions that change people’s lives without any way to check whether they are credible, biased, or simply wrong. In short, they stress that artificial intelligence promotes a modern-day “believe and do not question…”.

That is because the algorithms behind Predictim and related systems are a “black box” even to their own creators, who are unable to say why the system reached one decision rather than another.

Who guarantees, for example, that Predictim does not reflect its creators’ personal, often unconscious, preconceptions about how a babysitter should talk, dress, and generally behave? And if a babysitter or any other prospective employee refuses to be judged by an artificial intelligence system, she automatically comes under suspicion of hiding something and is put at a disadvantage against the other candidates.

Salvador, Predictim’s co-founder and chief executive, whose company launched last month out of SkyDeck, the University of California, Berkeley’s new-technology incubator, replies that the system makes its decisions ethically, and insists that the risk of hiring a violent or otherwise problematic babysitter is drastically reduced thanks to artificial intelligence, which comes as a helping hand to parents.

The Predictim check costs parents upwards of $25. The candidate babysitter must provide her name and e-mail address and consent in writing to broad access to her social media accounts. She can, of course, refuse, but then the parents will be informed of her refusal…

The system uses text, speech, and image processing algorithms to “sweep” the candidates’ accounts. The final result, including the score, is delivered confidentially to the parent customers alone, who are under no obligation to share the evaluation with the babysitter. The hiring decision remains their own, and if they wish they can disregard Predictim’s judgment (though having already paid, they will probably find that hard to do…).
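
The article gives no further detail on these algorithms. As a minimal, purely hypothetical sketch of the text-“sweeping” step alone (the real system reportedly also processes speech and images, and would use trained language models rather than keyword lists), one could imagine something like:

```python
# Minimal hypothetical sketch of "sweeping" a candidate's posts for flags.
# Everything here is illustrative, not Predictim's actual method: a real
# system would rely on trained classifiers, not hand-written patterns.
import re
from collections import Counter

# Hypothetical flag categories and trigger patterns.
PATTERNS = {
    "drug abuse": re.compile(r"\b(weed|high af|wasted)\b", re.IGNORECASE),
    "harassment": re.compile(r"\b(loser|shut up|hate you)\b", re.IGNORECASE),
    "negativity": re.compile(r"\b(awful|miserable|worst day)\b", re.IGNORECASE),
}

def sweep_posts(posts: list[str]) -> Counter:
    """Count how many posts trigger each flag category."""
    hits = Counter()
    for post in posts:
        for category, pattern in PATTERNS.items():
            if pattern.search(post):
                hits[category] += 1
    return hits

posts = [
    "Worst day ever, everything is awful.",
    "Great afternoon at the park with the kids!",
]
print(sweep_posts(posts))  # Counter({'negativity': 1})
```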