How "big data" gives insurers a giant edge over consumers
- Instagramming pics of yourself partying hard in Cancun? Insurance companies tap troves of social media and other digital data to make coverage decisions on home, auto, life and health insurance
- Insurers want to know about the media you consume, the lawsuits you're involved in, your credit record, education, employment and much more to help determine the risks of underwriting policies
- Some insurers have used data to assess how much they can raise premiums before you start shopping around -- a tactic called "price optimization"
Global insurance company Gen Re found a new way to say, "Be my Valentine" last Feb. 14. Simply upload a selfie from your smartphone and the company could offer you a life insurance policy within minutes. "Facial analytics … brings a more human focus to the onboarding process," said the insurer in a promotional press release.
Appealing, yes. But appalling to many consumer advocates and some state regulators, who question how much an insurer needs to know about you and when that knowledge becomes intrusive.
Facial features are just one method insurers use to decide who to cover, at what price and, conversely, who not to cover. The risk is that people whose faces look older, heavier or less healthy will have a harder time buying insurance from Gen Re, said professor Peter Kochenburger of the University of Connecticut Law School. It's also quite possible that this technology will discriminate against people of color, he added. Gen Re didn't respond to requests for comment.
Facial analysis is only one of the hundreds, perhaps thousands, of data points that insurers -- and the third-party vendors that supply so-called big data -- use in their analysis. An even bigger factor is the ability to "analyze your digital footprint" online.
"Companies are finding new ways to measure your risk," said Alyssa Connolly, a director at car insurance marketplace the Zebra, "and a lot has to do with how you engage online." For example, the type of device used, such as a laptop, tablet or smartphone, as well as the time of day it's in use.
Grilling Zuckerberg
It also has to do with who you converse with online and perhaps even what you say. Last month, Rep. Frank Pallone, D-N.J., chairman of the House Energy and Commerce Committee, asked Facebook CEO Mark Zuckerberg to brief the committee on whether the social network had been "misleading its users regarding the private or anonymous nature of closed Facebook groups."
According to a complaint filed with the Federal Trade Commission, Facebook's well-known algorithms used personal information collected from its users to prompt members of online support groups to divulge their medical conditions. That information wound up with companies -- including insurers -- that shouldn't have had it and that used lists of these groups' members to decide whether to offer them coverage, Pallone's letter said.
Facebook exposed deeply personal health information, including substance abuse disorders, HIV status, transgender parenting and histories of sexual assault, from people who thought their online conversations would be private, according to Pallone's letter.
Facebook didn't deny this information was being released. "Facebook is not an anonymous platform; real-name identity is at the center of the experience and always has been," said spokesperson Andy Stone. "It's intentionally clear to people that when they join any group on Facebook, other members of that group can see that they are a part of that community, and can see the posts they choose to share with that community."
Getting to know you, very well
It's no secret that insurers endeavor to find out as much as possible about potential applicants before offering auto, health, home or life insurance. And they use a lot of variables, such as applicants' lifestyle choices, the magazines they read, where they live, pending or resolved criminal or civil lawsuits, education level and employment.
In some instances, information access is restricted. Health insurers can no longer refuse coverage on the basis of preexisting conditions. In California, driving records and mileage are the primary criteria for setting auto insurance rates.
Elsewhere, however, insurers can access your credit rating and use it, particularly when deciding whether to offer auto insurance. Consumer groups like the Consumer Federation of America say that's unfair. But at least consumers can access their credit ratings and ask the credit bureaus to remove inaccurate information.
However, there's no way of knowing what's in a big data file. "These models could have 1,000 factors and are very sophisticated," said Birny Birnbaum, director of the Center for Economic Justice (CEJ). "So we can't tell if they're fair."
University of Connecticut's Kochenburger said what's even worse is that some of these data points may be proxies for factors insurers shouldn't use to make decisions, like race and economic status. One example: your ZIP code. Will you get a better home insurance policy if you live in Beverly Hills rather than downtown Los Angeles? And what will you pay?
"Lifestyle indicators"
In January, the New York Department of Financial Services began investigating insurers' underwriting guidelines and practices after seeing how the use of big data had spread to the life insurance market. The Empire State is trying to determine whether data or information sources unrelated to an applicant's medical condition are being treated as "lifestyle indicators" that could ultimately be used to deny coverage or increase its cost. Such indicators could include race, sexual orientation or past travel.
Insurers often succeed in fending off efforts to find out about their underwriting standards by claiming they have the right to keep this information private from competitors. Since insurers obtain much of the information and algorithms used to generate big data from third-party vendors, it's even harder for state regulators to ferret it out. In most instances they don't even try.
The nation's standard-setting organization for insurance regulators, the National Association of Insurance Commissioners (NAIC), has had a committee investigating how insurers use big data for five years, but it hasn't taken "a single action," said the CEJ's Birnbaum. Lobbyist Joel Wood of the Council of Insurance Agents & Brokers once remarked that NAIC stood for "No Action Is Contemplated."
"There's a growing gap between the industry's use of big data and regulators' ability to supervise it," Kochenburger said.
Price-hiking strategy
Due to the secrecy surrounding the ways insurers use big data, consumer advocates find it difficult to point to specific industry indiscretions. But leaks do occur -- often when third-party vendors tout how their products can help insurers.
One such case involved "price optimization." This helps insurers determine how much they can raise an insured's rates before the person starts shopping around. Several states caught on to this practice and have banned it.
Equally insidious is "claims optimization." This allows insurers to figure out who will accept a low claim offer and, conversely, who will hold out and perhaps even take the insurer to court for a better settlement.
Insurance companies and their vendors are careful to never say they do this because it would "violate every state's insurance laws," said Kochenburger. But he does point out that many insurers have claims programs that set standard damage amounts and are "difficult to override."
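For readers curious what "claims optimization" could look like, below is a purely hypothetical Python sketch of the idea Kochenburger describes: scoring which claimants seem unlikely to dispute a settlement and offering them less. Every feature, weight and threshold here is invented for illustration; it does not represent any insurer's or vendor's actual model.

```python
from dataclasses import dataclass

@dataclass
class Claimant:
    name: str
    has_attorney: bool        # represented claimants tend to push back
    prior_disputes: int       # history of contesting offers
    claim_amount: float       # size of the claim in dollars

def dispute_score(c: Claimant) -> float:
    """Toy score: higher means more likely to hold out or litigate."""
    score = 0.0
    score += 0.5 if c.has_attorney else 0.0
    score += 0.2 * min(c.prior_disputes, 3)
    score += 0.1 if c.claim_amount > 10_000 else 0.0
    return min(score, 1.0)

def proposed_offer(c: Claimant, assessed_damage: float) -> float:
    """Discount the offer only for claimants scored as unlikely to dispute."""
    return assessed_damage * (0.7 if dispute_score(c) < 0.3 else 1.0)

print(proposed_offer(Claimant("A", False, 0, 5_000), assessed_damage=5_000))    # 3500.0
print(proposed_offer(Claimant("B", True, 2, 15_000), assessed_damage=15_000))   # 15000.0
```

Even a toy version makes the consumer-protection concern plain: two people with identical damage can be offered very different settlements based on factors that have nothing to do with the claim itself.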