Facebook ‘deliberately broke’ Android apps in user experiment

Jan 5, 2016 | Facebook marketing, Mobile, Regulation


After its controversial ‘sadness’ experiment back in 2014, Facebook is facing new criticism for playing with users’ feelings, this time by breaking its app on purpose.


A report from tech journal The Information accuses the social network of deliberately disabling its Android app for selected users in order to monitor at what point they’d stop coming back.
The experiment found that even when the downtime stretched on for considerable periods, users in the study repeatedly tried to access the network, either by re-opening the app or eventually turning to the mobile version of the site.
“The company wasn’t able to reach the threshold,” the site says, with someone familiar with the experiment adding that “people never stopped coming back”.
Even when the app was broken for hours on end, people simply used the mobile web version of the site rather than go without Facebook, the study found.
The purpose of the experiments was to give the company a backup plan in case its competition with Google (over advertising and search) turned hostile and its app was removed from the Google Play store.
While Facebook could make its app work without the Google Play store, it would also have to develop its own replacements for many of the features provided by Google Play services, including automatic updates and in-app purchases.
The test happened only once, “several years ago”, but it has reignited controversy around the site’s user testing.
Two years ago, Facebook suffered a large backlash after revealing that it had been experimenting on its users to study “emotional contagion”.
It eventually apologised for the psychological experiments, which involved deliberately increasing the positive or negative content visible in subjects’ news feeds and then attempting to discern whether doing so made their own posts happier or sadder.
