AI babysitting service Predictim vows to stay online after being blocked by Facebook and Twitter

NEW YORK — Predictim, an app that uses artificial intelligence to vet potential babysitters, has vowed to keep using social media data to improve its recommendation engine even after being blocked by Facebook and Twitter, co-founder and CEO Sal Parsa said.

"We will continue what we are doing now, helping parents make decisions as to who they want to trust," Parsa said in an interview with CBS News.

Parsa's company uses a form of AI called natural language processing to filter and rank search results drawn from a vast cache of information gathered from social media sites. Predictim's product is designed, Parsa said, to help parents and pet owners learn more about the personality traits of potential babysitters, dog walkers and other caretakers.

The company's technology generates a "risk score" based on social media activity that ranks a person's attitude and their likelihood of engaging in online harassment or abusing drugs. The site also flags questionable content, and automatically generates a report that explains what each ranking means.
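Predictim has not published how that score is computed, but risk-scoring systems of this kind typically combine per-category classifier outputs into a single number. The Python sketch below illustrates only that general pattern; the category names, weights, and 0-to-5 scale are hypothetical stand-ins, not Predictim's actual method.

```python
# Minimal sketch of a per-category risk aggregator.
# The categories, weights, and 0-5 scale are hypothetical illustrations,
# not Predictim's actual model.
from dataclasses import dataclass

# Hypothetical severity weights for each behavior category.
CATEGORY_WEIGHTS = {
    "bullying": 0.3,
    "harassment": 0.3,
    "explicit_content": 0.2,
    "drug_abuse": 0.2,
}

@dataclass
class Flag:
    category: str   # one of CATEGORY_WEIGHTS
    score: float    # classifier confidence in [0, 1]
    excerpt: str    # the post that triggered the flag, shown in the report

def risk_score(flags: list[Flag]) -> float:
    """Combine per-post classifier outputs into a single 0-5 risk score."""
    if not flags:
        return 0.0
    # Keep the strongest weighted signal per category, then sum.
    worst: dict[str, float] = {}
    for f in flags:
        w = CATEGORY_WEIGHTS.get(f.category, 0.0)
        worst[f.category] = max(worst.get(f.category, 0.0), w * f.score)
    # Cap the combined signal at 1.0 and scale onto a 0-5 range.
    return round(min(sum(worst.values()), 1.0) * 5, 1)

flags = [
    Flag("drug_abuse", 0.9, "example post A"),
    Flag("harassment", 0.4, "example post B"),
]
print(risk_score(flags))  # 1.5 on the hypothetical 0-5 scale
```

Carrying the flagged excerpt alongside each score mirrors the report the company says it generates, in which parents are shown examples of the content behind each flag.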

"Our AI assesses an individual's online behavior in regards to bullying, harassment, sexual explicit content, and drug abuse," Parsa said.

Facebook, Instagram (which is owned by Facebook) and Twitter recently blocked Predictim for violating the tech firms' rules on data harvesting and user privacy.

"We strictly prohibit the use of Twitter data and APIs for surveillance purposes, including performing background checks," Twitter spokesperson Nick Pacilio said. "When we became aware of Predictim's services, we conducted an investigation and revoked their access to Twitter's public APIs." API is an acronym for application programming interface, the technology that allows computer applications exchange information.

Facebook did not respond to CBS News' request for comment, but recently told the BBC, "scraping people's information on Facebook is against our terms of service."

Parsa pushed back against the ban. "We do not use automated scrapers on Facebook so we are not violating their platform policy," he said. "We use publicly available data."

The Predictim app's use of public data has been controversial since the product launched. A number of users on Twitter noted that the company's results were inconsistent and expressed concern that in addition to social media information, Predictim might be relying on open source intelligence (OSINT) — commonly available datasets like academic journals, think tank studies, and traditional mass media reports.

Digital privacy specialist Sarah Clarke worried that relying on OSINT data was too generic and created "murky, inherently error-prone" results.

"[The] potential for false positives and false negatives is huge, and accuracy relies on both requestor and target having more and more data disclosed," said Clarke.

AI experts are dubious about the company's opaque methodology and are concerned that the data sources it uses could lead to inaccurate, incomplete, or damaging results.

"A lot of datasets are biased, especially against women and minorities," said James Barrat, AI expert and author of the book "Our Final Invention." "Many datasets were hand-coded, sometimes decades ago, and have passed on the biases of their creators. For example, a prison sentencing algorithm in Florida was built using historical prison sentences. They were harsher for minorities."

Barrat was also skeptical about using AI to rank individuals based on social media information.

"What gives a prospective employer, not even an actual employer, the right to invade someone's privacy by demanding access to their social media accounts?" Barrat asked. "The processes behind these magical ranking algorithms are 'black-box' systems, meaning you can't look under the hood to see how the rankings are being made."

Predictim was also critiqued online for using scare tactics to target parents of young children in its promotional materials. Two documents linked on the Predictim site use images of children alongside anecdotes about how the company's "advanced artificial intelligence" might prevent tragedy.

One story, under the headline "Protecting your children from babysitting nightmares is possible, if you have the right tools," described a scenario in which a child is harmed by a criminal. "A mother needed to go out of town, so she simply leaned on a person she'd trusted her children with in the past. This time, however, the babysitter was out for pain," reads the Predictim scenario.

The rhetoric bothered many technology experts, including David Heinemeier Hansson, programmer and creator of the popular web development framework Ruby on Rails. "I think black-box AI that renders opaque verdicts that has substantial impact on people's lives is both bad, dangerous, misguided, and irresponsible," he said. "When we don't know what's in the algorithm, there's nothing to trust."

Heinemeier Hansson explained that babysitters could potentially be harmed by AI systems that rely on social media data. "I don't think it's possible to look at 22 Twitter posts and then declare with any reasonable certainty that the person who wrote those are prone to drug use or of poor social character," he said. "Their dignity is degraded by being subjected to the process" and "they may not get a job that they were eminently qualified for because the machine gave them a [bad] score."

But Parsa objected to recent criticism. "We stack our [technology] with other models to make the final output explainable and not a black-box," he said. "Every scan that is flagged for medium or high risk is human reviewed by an analyst trained on conscious and unconscious bias and we show the parents examples of why the AI flagged this person so they can make their own judgement."
