Eighteen months ago, Arturo Bejar and some colleagues at Facebook were reviewing photos on the site that users had flagged as inappropriate. They were surprised by the offending content—because it seemed so benign.

“People hugging each other, smiling for the camera, people making goofy faces—I mean, you could look at the photographs and you couldn’t tell at all that there was something that would make somebody upset,” says Bejar, a director of engineering at the social networking site.

Arturo Bejar, a Facebook engineer who has been leading its “social reporting” project, speaking at Facebook’s second Compassion Research Day on July 11. © Jeffrey Gerson/Facebook

Then, while studying a photo, one of his colleagues realized something: The person who reported the photo was actually in the photo, and the person who posted the photo was their friend on Facebook.


As the team scrolled through other images, they noticed that was true in the vast majority of cases: Most of the issues involved people who knew each other but apparently didn’t know how to resolve a problem between themselves.

Someone would be bothered by a photo of an ex-boyfriend or ex-girlfriend, for instance, or would be upset because they didn’t appear in a photo that ostensibly showed a friend’s “besties.” Often people didn’t like that their kids were in a photo a relative had uploaded. And sometimes they just didn’t like the way they looked.

Facebook didn’t have foolproof ways to identify or analyze these problems, let alone resolve them. And that made Bejar and his colleagues feel like they weren’t adequately serving the Facebook community—a concern amplified by the site’s exponential growth and worries about cyberbullying among its youngest users.

“When you want to support a community of a billion people,” says Bejar, “you want to make sure that those connections over time are good and positive and real.”

A daunting mission, but it’s one that Bejar has been leading at Facebook, in collaboration with a team of researchers from Yale University and UC Berkeley, including scientists from the Greater Good Science Center. Together, they’re drawing on insights from neuroscience and psychology to try to make Facebook feel like a safer, more benevolent place for adults and kids alike—and even help users resolve conflicts online that they haven’t been able to tackle offline.

“Essentially, the problem is that Facebook, just like any other social context in everyday life, is a place where people can have conflict,” says Paul Piff, a postdoctoral psychology researcher at UC Berkeley who is working on the project, “and we want to build tools to enable people who use Facebook to interact with each other in a kinder, more compassionate way.”

Facebook as relationship counselor

For users troubled by a photo, Facebook provides the option to click a Report link, which takes them through a sequence of screens where they can elaborate on the problem, called the “reporting flow.”

Up until a few months ago, the flow presented all “reporters” with the same options for resolving the problem, regardless of what that problem actually was; those resolutions included unfriending the user or blocking them from ever making contact again on Facebook.

“One thing that we learned is that if you give someone the tool to block, that’s actually not in many cases the right solution because that ends the conversation and doesn’t necessarily resolve anything—you just sort of turn a blind eye to it,” says Jacob Brill, a product manager on Facebook’s Site Integrity and Support Engineering team, which tries to fix problems users are experiencing on the site, from account fraud to offensive content.

Jacob Brill, a Facebook product manager, presenting some of his social reporting team’s preliminary findings at the second Compassion Research Day. © Jeffrey Gerson/Facebook

Instead, Brill’s team concluded that a better option would be to facilitate conversations between a person reporting content and the user who uploaded the content, a system that they call “social reporting.”

“I really think that was key—that the best way to resolve conflict on Facebook is not to have Facebook step in, but to give people tools to actually problem-solve themselves,” says Piff. “It’s like going to a relationship counselor to resolve relationship conflict: Relationship counselors are there to give couples tools to resolve conflict with each other.”

To help Facebook develop those tools, Bejar turned to Piff and two of his UC Berkeley colleagues, social psychologist Dacher Keltner and neuroscientist Emiliana Simon-Thomas—the GGSC’s faculty director and science director, respectively—all of whom are experts in the psychology of emotion.

“It felt like we could sharpen their communication,” says Keltner, “just to make it smarter emotionally, giving kids and adults sharper language to report on the complexities of what they were feeling.”

The old reporting flow wasn’t very emotionally intelligent. When first identifying the problem to Facebook, users had some basic options: They could select “I don’t like this photo of me,” claim that the photo was harassing them or a friend, or say that it violated one of the site’s Community Standards—for hate speech or drug use or violence or some other offense. Then they could unfriend or block the other user, or send that user a message.

Initially, users had to craft that message themselves, and only 20 percent of them actually sent a message. To boost that rate, Facebook provided some generic default text—“Hey I don’t like this photo. Please remove it.”—which raised the send rate to 51 percent. But often users would send one of these messages and never hear back, and the photo wouldn’t get deleted.

Bejar, Brill, and others at Facebook thought they could do better. The Berkeley research team believed this flow was missing an important step: the opportunity for users to identify and convey their emotions. That step would help counteract the ease with which people online can be insensitive to, or even oblivious of, how their actions affect others.

“If you get someone to express more productively how they’re feeling, that’s going to allow someone else to better understand those feelings, and try to address their needs,” says Piff. “There are some very simple things we can do to give rise to more productive interpersonal interactions.”

© Courtesy of Facebook

Instead of simply having users click “I don’t like this photo,” for instance, the team decided to prompt users with the sentence “I don’t like this photo because:”, which they could complete with emotion-laden phrases, such as “It’s embarrassing” or “It makes me sad” (see screenshot at left). People reporting photos selected one of these options 78 percent of the time, suggesting that the list of phrases effectively captured what they were feeling.

People were then taken to a screen telling them that the best way to remove the photo was to ask the other user to take it down—blocking or unfriending were no longer presented as options—and they were given more emotionally intelligent text for a message they could send through Facebook, tailored to the particular situation.

The emotionally intelligent message UC Berkeley researchers developed for Facebook “reporters.” (User’s name has been erased.) © Courtesy of Facebook

That text included the other person’s name, asked them more politely to remove the content (“would you please take it down?” vs. the old “please remove it”), and specified why the user didn’t like the photo, emphasizing their emotional reaction and point of view—but still keeping a light touch. For example, a photo that made someone embarrassed was described as “a little embarrassing to me.” (See the screenshot at left for an example.)
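To make the mechanics concrete, here is a minimal sketch, in Python, of how a reporting flow might map a reporter’s selected emotion onto a pre-written, emotionally intelligent message of the kind described above. The templates, names, and data structure are illustrative assumptions, not Facebook’s actual code.

```python
# Hypothetical sketch (not Facebook's implementation): the emotion a reporter
# selects determines the editable, pre-written message they are offered.
# Template wording and names are illustrative only.

EMOTION_TEMPLATES = {
    "It's embarrassing": (
        "Hey {name}, this photo is a little embarrassing to me. "
        "Would you please take it down?"
    ),
    "It makes me sad": (
        "Hey {name}, this photo makes me sad. "
        "Would you please take it down?"
    ),
}

# Fallback roughly matching the old, generic default text.
GENERIC_TEMPLATE = "Hey {name}, I don't like this photo. Please remove it."


def default_message(selected_emotion: str, uploader_name: str) -> str:
    """Return a pre-written message tailored to the reporter's stated emotion."""
    template = EMOTION_TEMPLATES.get(selected_emotion, GENERIC_TEMPLATE)
    return template.format(name=uploader_name)


if __name__ == "__main__":
    print(default_message("It's embarrassing", "Alex"))
    # Hey Alex, this photo is a little embarrassing to me. Would you please take it down?
```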

It worked. Roughly 75 to 80 percent of people in the new, emotionally intelligent flow sent these default messages without revising or editing the text, a 50 percent increase from the number who sent the old, impersonal message.

When Keltner and his team presented these findings at Facebook’s second Compassion Research Day, a public event held on Facebook’s campus earlier this month, he emphasized that what mattered wasn’t just that more users were sending messages but that they were enjoying a more positive overall experience.

“There are a lot of data that show when I feel stressed out, mortified, or embarrassed by something happening on Facebook, that activates old parts of the brain, like the amygdala,” Keltner told the crowd. “And the minute I put that into words, in precise terms, the prefrontal cortex takes over and quiets the stress-related physiology.”

Preliminary data seem to back this up. Among the users who sent a message through this new flow, roughly half said they felt positively about the other person (called the “content creator”) after they sent them the message; less than 20 percent said they felt negatively. (The team is still collecting and analyzing data on how users feel before they send the messages, and on how positively they felt after sending a message through the old flow.)

In this new social reporting system, half of content creators deleted the offending photo after they received the request to remove it, whereas only a third deleted the photo under the old system. Perhaps more importantly, roughly 75 percent of the content creators replied to the messages they received, using new default text that the researchers crafted for them. That’s a nearly 50 percent increase from the number who replied to the old messages.
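For readers curious how outcome figures like these are tallied, the sketch below shows, using made-up records, how send, deletion, and reply rates of this kind could be computed. The field names and sample data are assumptions for illustration only, not Facebook’s data or schema.

```python
# Illustrative only: computing the kinds of outcome rates quoted above
# (message sent, photo deleted, creator replied) from made-up report records.

reports = [
    {"message_sent": True, "photo_deleted": True, "creator_replied": True},
    {"message_sent": True, "photo_deleted": False, "creator_replied": True},
    {"message_sent": False, "photo_deleted": False, "creator_replied": False},
    {"message_sent": True, "photo_deleted": True, "creator_replied": False},
]


def rate(records: list[dict], outcome: str) -> float:
    """Fraction of records in which the given outcome occurred."""
    return sum(r[outcome] for r in records) / len(records)


for outcome in ("message_sent", "photo_deleted", "creator_replied"):
    print(f"{outcome}: {rate(reports, outcome):.0%}")
```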

“The right resolution isn’t necessarily for the photo to be taken down if in fact it’s really important to the person who uploaded it,” says Brill. “What’s really important is that you guys are talking about that, and that there is a dialogue going back and forth.”

This post is a problem

That’s all well and good for Facebook’s adult users, but kids on Facebook often need more. For them, Facebook’s hazards include cyberbullying from peers and manipulation by adult predators. Rough estimates indicate that more than half of kids have had someone say mean or hurtful things to them online.

Previously, if kids felt hurt or threatened by someone on Facebook, they could click the same Report link adults saw, which took them through a similar flow, asking if they or friends were being “harassed.” From there, Facebook gave them the option to block or unfriend that person and send them a message, while also suggesting that they contact an adult who could help.

Yale developmental psychologist Marc Brackett discussing his collaboration with Facebook at the second Compassion Research Day. © Jeffrey Gerson/Facebook

But after hearing Yale developmental psychologist Marc Brackett speak at the first Compassion Research Day in December of 2011, Bejar and his colleagues realized that the old flows failed to acknowledge the particular emotions that these kids were experiencing. That oversight might have made the kids less likely to engage in the reporting process and contact a supportive adult for guidance.

“The way you really address this,” Bejar said at the second Compassion Research Day, “is not by taking a piece of content away and slapping somebody’s hand, but by creating an environment in which children feel supported.”

To do that, he enlisted Brackett and two of his colleagues, Robin Stern and Andres Richner. The research team organized focus groups with 13-to-14-year-old kids, the youngest age officially allowed on Facebook, and interviewed kids who’d experienced cyberbullying. The team wanted to create tools that were developmentally appropriate to different age ranges, and they decided to target this youngest group first, then work their way up.

From talking with these adolescents, they pinpointed some of the problems with the language Facebook was using. For instance, says Brackett, some kids thought that clicking “Report” meant that the police would be called, and many didn’t feel that “harassed” accurately described what they had been experiencing.

Instead, Brackett and his team replaced “Report” with language that felt more informal: “This post is a problem.”

They tried to apply similar changes across the board, refining language to make it more age-appropriate. Instead of simply asking kids whether they felt harassed, they enabled kids to choose among far more nuanced reasons for reporting content, including that someone “said mean things to me or about me” or “threatened to hurt me” or “posted something that I just don’t like.” They also asked kids to identify how the content made them feel, selecting from a list of options.

Depending on the problem they identified, the new flows gave kids more customized options for the action they could take in response. That included messages they could send to the other person, or to a trusted adult, that featured more emotionally rich and specific language, tailored to the type of situation they were reporting.
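As a rough illustration of that branching, here is a small Python sketch in which the reason a teen selects determines the customized follow-up options they see. The categories echo the article’s examples; the options and structure are assumptions, not the actual product logic.

```python
# Hypothetical sketch of the teen reporting flow's branching: the selected
# reason maps to customized next steps, including reaching a trusted adult.
# Categories echo the article; the listed options are illustrative.

FLOW_OPTIONS = {
    "said mean things to me or about me": [
        "send a pre-written, emotionally specific message to the person",
        "share what happened with a trusted adult",
    ],
    "threatened to hurt me": [
        "share what happened with a trusted adult",
        "block the person",
    ],
    "posted something that I just don't like": [
        "ask the person to take it down with a pre-written message",
    ],
}


def next_steps(reason: str, feeling: str) -> list[str]:
    """Return customized options for the reported problem, noting the stated feeling."""
    options = FLOW_OPTIONS.get(reason, ["tell us more about what happened"])
    # In the real flow, the selected feeling would be woven into the message text.
    return [f"{option} (mentioning that it made you feel {feeling})" for option in options]


if __name__ == "__main__":
    for step in next_steps("said mean things to me or about me", "embarrassed"):
        print("-", step)
```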

“We wanted to make sure that they didn’t feel isolated and alone—that they would receive support in a way that would help them reach out to adults who could provide them with the help that they needed,” Brackett said when presenting his team’s work at the second Compassion Research Day.

After testing these new flows over two months, the team made some noteworthy discoveries. One surprise was that, when kids reported problems that they were experiencing themselves, 53 percent of those problems concerned posts that they “didn’t like,” whereas only three percent of the posts were seen as threatening.

“The big takeaway here is that … a lot of the cases are interpersonal conflicts that are really best resolved either between people or with a trusted adult just giving you a couple of pointers,” Jacob Brill said at the recent Compassion Research Day. “So we’re giving [kids] the language and the resources to help with a situation.”

And those resources do seem to be working: 43 percent of kids who used these new flows reached out to a trusted adult when reporting a problem, whereas only 19 percent did so with the old flows.

“The new experience that we’re providing is empowering kids to reach out to someone they trust to get the help that they need,” says Brackett. “There’s nothing more gratifying than being able to help the most amount of kids in the quickest way possible.”

Social reporting 2.0

Everyone involved in the project stresses that it’s still in its very early stages. So far, it has only targeted English-language Facebook users in the United States. Brackett’s team has focused only on 13-to-14-year-olds, and the new flows developed by the Berkeley team were piloted on only 50 percent of Facebook users, randomly selected.

The GGSC’s Emiliana Simon-Thomas discusses her work with Facebook at the second Compassion Research Day. © Jeffrey Gerson/Facebook

Can they build a more emotionally intelligent form of social reporting that works for different cultures and age groups?

“Our mission at Facebook is to do just that,” says Brill. “We will continue to figure out how to make this work for anyone who has experiences on Facebook.”

The teams are already working to improve upon the results they presented at the second Compassion Research Day. Brackett says he believes they can encourage even more kids on Facebook to reach out to trusted adults, and he’s eager to start helping older age groups. He’s excited by the potential impact.

“When we do our work in schools, it’s one district, one school, one classroom at a time,” he says. “Here, we have the opportunity to reach tens of thousands of kids.”

And that reach carries exciting scientific implications for the researchers.

“We’re going to be the ones who get to go in and have 500,000 data points,” says Simon-Thomas. “It’s beyond imagination for a research lab to get that kind of data, and it really taps into the questions we’re interested in: How does conveying your emotion influence social dynamics in rich and interesting ways? Does it facilitate cooperation and understanding?”

And what’s in it for Facebook?

Bejar, the father of two young children, says that protecting kids and strengthening connections between Facebook users makes the site more self-sustaining in the long run. The project will have succeeded, he says, if it encourages more users simply to think twice before posting a photo that might embarrass a friend or to notify that friend when they post a questionable image.

“It’s those kinds of kind, compassionate interactions,” he says, “that help build a sustainable community.”

Greater Good wants to know: Do you think this article will influence your opinions or behavior?
Comments

Humans studying humans…

Life | 10:03 am, July 25, 2012 | Link

 

I don’t think you can MAKE anyone do anything.  Then
again, procedures could be put into place to eliminate
those who cause the most trouble.  It also needs to
be easier to report people causing trouble or leaving
threatening messages.

Keith | 6:28 pm, July 25, 2012 | Link

 

Just a little bickering or a misunderstanding, I think, would warrant a little dialogue, maybe a discussion of the issues between the parties involved. That would be a good thing. On the other hand, there comes a time when the “issues” have gone to another level (if you will), such that even unfriending is out of the question. In that case all ties and communications should be blocked by implementing the blocking option. It would all depend on the degree to which the offended party felt imposed upon.

Irma Robinson | 1:11 pm, July 26, 2012 | Link

 

I don’t think one can influence this. When someone logs in to fb, what they do or say reflects how their day has been.

Praveen Kumar | 8:25 am, July 29, 2012 | Link

 

I think everyone should just get off Facebook.

scorpio6 | 10:49 am, July 29, 2012 | Link

 

I’ll be very interested in your cross-cultural findings
as I live in NZ and I’m of Maori descent. Maori are
hugely social and connected, and those in my
Facebook network have an average of 500-700
friends - much higher than average. By and large
they seem to resolve most of the issues your
article discusses directly and painlessly.

So I’m wondering - is this mostly an issue of
connectedness and personal interaction on-line and
not something specific to Facebook? And are there
lessons for all the other interactions people have
with each other?

Alan Armstrong | 1:18 pm, July 30, 2012 | Link

 

I don’t think that just requesting a person to remove the comment or photo can help much, because it’s not certain that they will accept the request if they are an unknown person who doesn’t care about anyone’s feelings. I think that if they don’t act on the first request, there must be an option to block them at the second complaint.

Swati Gupta | 1:44 pm, July 30, 2012 | Link

 

The answer, my dears, is no. They can only improve, not perfect. The fact of the matter is that we are us. Some people like harassing others. Sometimes there is an honest mistake. Sometimes people are just cranky/have to poop. Either way, nothing will change. Maybe these scientists should devote their time to something more meaningful…

Dylan Mahone | 7:18 pm, July 30, 2012 | Link

 

Research done by GGSC is very important; knowing how better to communicate will improve our lives in many ways. But I wonder about this approach.

I would never, ever photograph someone then post it online without their permission. Please tell me I’m not the only one who sees how disgusting that is.

When I have a day out with my friends, I might take photos. But I don’t even quote my friends (or any human being) in my posts without asking them first, much less use an image of them or their property without a very clear conversation to make sure it’s okay.

I’ve seen people post (on Wordpress) photos of their friends drunk and passed out, or photos of other people’s kids. Really?

If someone posts a photo without asking, they’re thoughtless. If they refuse to take it down when you ask nicely, they’re a sociopath.

Amelie | 3:36 pm, July 31, 2012 | Link

 

There is huge potential for misunderstanding and miscommunication. Communication is a fraught and complex thing at the best of times.
When people normally communicate, checks and balances are in play that try to keep communication civilised. These include bearing responsibility for what you say and write, since you are in the vicinity of the person you are communicating with.

When it’s done on social networking sites, these normal checks and balances that restrain human interest do not apply any more. Social networking interaction is devoid of the usual level of morality, consideration and compassion normal human interaction demands.

Therefore minimal use of social networking results in increased quality in our communication.

What I’m saying is that there is a direct correlation between the use of social networking and the quality of our communication. Therefore to expose yourself to the least possible chance of miscommunication, use social networking as little as possible. Or as in my case, not at all.

The sense of freedom and liberty is remarkable.

Stephen | 10:24 pm, August 2, 2012 | Link

 

I don’t agree with Stephen unless he’s talking about
total strangers:

“When it’s done on social networking sites, these
normal checks and balances that restrain human
interest do not apply any more. Social networking
interaction is devoid of the usual level of morality,
consideration and compassion normal human
interaction demands.”

That doesn’t happen with my 500+ Facebook
friends because we are part of an organization
with a common goal who interact in other ways
too. I have very high quality communication with
them and intend to keep doing that.

Alan Armstrong | 12:56 pm, August 3, 2012 | Link

 

Well-meant research I’m sure, but it seems to me
that it fails to address (perhaps is incapable of
addressing) the most basic problem. That of
anonymity.
Just as long as immature, spiteful and essentially
cowardly people can hide behind pseudonyms they’ll
continue to post inappropriate stuff, use foul
language, and bully.
It seems to me that the only way forward is for
Facebook itself to accept the responsibility for
naming and shaming the worst offenders.
I am of a generation which often finds the modern
interpretation of ‘bullying’  over-sensitive, and
examples here of people being upset by having
their photographs posted ‘inappropriately’ seem to
be a case in point. Nonetheless serious bullying,
seriously vile language etc should result in the
culprit being publically held to account.
Remember the old adage that all bullies are
cowards? If they couldn’t hide behind their FB
anonymity they almost surely would not cause
problems.
Facebook apparently does not want to be
involved in ‘Censorship’. Fair enough—but naming
and shaming the more blatant offenders does not
fall within that category. It is a public service.

Redvers A King | 3:34 am, August 17, 2012 | Link

 

Redvers,

Good point—I raised this issue to the team at
Facebook. They pointed out that what
distinguishes Facebook from other online forums
is that users can’t really hide behind pseudonyms.
Unlike other sites where you can leave a comment
anonymously (like, say, our site here at Greater
Good) for all to see, your ability to comment and
reach people with your posts on Facebook is mostly
contingent on your number of “friends”—and, in
turn, your friend requests are mostly accepted by
people who recognize your name and photo, or see
how you’re connected through a mutual friend. So
when you post, you’re using your own name, and
your messages are immediately visible to friends,
family, your grandmother (if she’s on Facebook, as
mine is).

Research does suggest that our behavior is less inhibited online than in real life,
partly because we lack the cues of face-to-face
communication that tell us how our actions affect
others emotionally. But to a certain extent, it does
seem like this “online disinhibition effect” is
mitigated on Facebook by the fact that we’re far
less likely to be posting anonymously or hiding
behind a pseudonym.

Jason Marsh | 10:31 am, August 17, 2012 | Link

 

I have found in most circumstances people who know each other can work issues out.  Yes, your findings may help some who need help in communicating and working things out as adults. However, given the fact that about 50% (I would guess) of FB accounts are fake, all the research in the world is not going to help. 

Yes, I know FB says it does not allow fake accounts or users under 13 years old, but it has yet to stop them, and it has been easy for me to find a lot of fake accounts and kids as young as 9 years old boasting of online sexual relations.

So, all your research is based on accounts that go by the rules, which is not where the issue lies for the most part. The problem arises from those with fake accounts.

The fake accounts use a model’s photo as their picture and a fake bio they create to hide behind, along with a fake name on the email they used to start the account. I have no doubt the majority of the reporting, hate posting, bullying, etc. comes from those fake accounts.

If I can find hundreds upon hundreds of these fake accounts (I don’t mean business accounts or organizations; I mean role players and those who hide behind fake everything), I find it hard to believe that FB cannot. But if they want help, they can contact me.

diane | 3:20 am, September 5, 2012 | Link

 

I don’t think that this will solve the issue, because the person who posts the photos may be an ex-lover or a stranger. In such situations this will not work out. I’m facing this problem because of my ex-boyfriend. I’m not on Facebook, but he has created an account in my name, sent requests to all my friends, and posted photos which are very bad. I reported the photos, but they were not removed. I suggest you remove and block his or her entire profile and don’t let them use Facebook from then onwards. I have one question: will the Facebook team keep the photos that a user has shared even after he or she removes them from their profile? Please, I want an answer to this.

sowmya | 10:51 am, September 5, 2012 | Link

 

I’m glad that Facebook is taking steps to become
more emotionally intelligent. That was always my
biggest issue with it. I like the idea of a community
where people could freely communicate and be
themselves. I think facebook has noble motives, but
they have a long way to go. This is the kind of
refinement that makes me think that they are
working to earn the praise and fanfare that they had
achieved so prematurely. I’m proud to say that I am a user of the site, and this is one more reason why.

andrew c | 11:07 am, October 29, 2012 | Link

 

diane | 3:20 am, September 5, 2012

I totally agree with you Diane.  The study is far
from being accurate when there are thousands of
fake profiles.

Problem is that there are large groups of anti-
social (with Sociopathic characteristics) members
that find it very funny to make very hateful (their
free speech argument is invalid) comments
and/or pages, and feel that there are no
consequences for their behaviors.

Facebook has a moral obligation to monitor their
site to ensure safety for all of their users.

I find it hard to believe that the Founder, CEO, shareholders, and upper management staff would condone allowing such pages (some involving child porn, hate pages for deceased individuals, and also for people with disabilities).

Kat O'Malley | 6:52 am, November 3, 2012 | Link

 

The remedy is in the users’ hands, Kat. I help run a
rather large Facebook network for a reform
movement. Because our members have a common
goal they are strongly supportive of each other,
and we’ve learned how to use Facebook’s privacy
settings to filter out the sociopaths and
scammers. The media say they’re there but we
never see them.

We still get hate mail, but our numbers mean we
can deal with it ourselves. Most people are
daunted when several hundred strangers
simultaneously pour scorn on them, and if that
fails we have laws about such things as inciting to
commit a crime. I recently used that very
effectively when someone set up a page urging the
assassination of a prominent person. I only had to
send them a private message saying take the page
down or I’ll ask the Police to investigate you, and I
got instant compliance. Our Government has also
twice stepped in to ask Facebook to remove
offensive pages when our own efforts failed. I
don’t know what country you’re in, but most
Governments are supportive of human rights.

Also remember very few haters have any real
influence. One recent page that attracted
countrywide attention here turned out to be a
couple of silly teenagers in a small rural town who
were easily dealt with.

Alan | 1:14 pm, November 3, 2012 | Link

 

Emotional intelligence, it seems, can be taught!

Facebook is doing just this; by guiding the communications between “content creators” and “content reporters”, this experimental approach is helping users to be more eloquent in articulating their problems with specific content.

Interestingly, these communications improve when they become more specific: when a reporter tells a creator exactly why they don’t like the content, or how the posting of that content makes them feel.

When faced with a problem relating to another person, we’re often tempted to downplay it, to say as little as possible, perhaps on the theory of “Least said, soonest mended”.  Yet - to oppose one conventional piece of wisdom with another - honesty _is_ usually the best policy.  This more open approach, exemplified by Facebook’s emotionally smarter dialogues, doesn’t avoid conflict; instead, it offers the content creator an opportunity to understand the effects their posting has on the reporter - in short, to empathize with them and to show compassion.

So I, for one, am very pleased that Facebook is devoting considerable thought, skill and effort towards making our many, many online “intentional” communities there better, more caring and more compassionate “places” to live in.  They might be only virtual places, but the emotions they excite are very real.

Thank you for this article - an intriguing example showing just how even an industrial giant can put compassion into action!

Yahya Abdal-Aziz | 6:16 am, November 9, 2012 | Link

 

Alan, in my dealings with the moderators of these hate pages (and also from several thousand Facebook users), I have found a lack of social courtesy, as well as rudeness and threats.

These hate pages, are for the most part, run by
kids (who by the way brag online about how many
fake accounts that they have) and because of their
anonymity feel that they can just do anything they
want.  They go as far as stealing personal pictures
of users, creating pages to mock and ridicule these
people. When we report a breach of their Community Standards, and also the TOS, to Facebook, we are often ignored, and I get the feeling we’re laughed at. This is no laughing matter. Just yesterday many of the users from our group commented that the reviews came back within seconds of filing their complaints about specific pictures. Why would that happen?

Facebook needs to uphold THEIR Community Standards. By allowing child porn, hate pages, accounts for children under the age of 13, and people who have several accounts, they are creating an environment that is not “family friendly.”

Just recently our groups have been trying to get
Facebook to remove a picture where a woman,
who was unconscious and lying on the ground,
was being sexually assaulted by a male. 
Facebook’s response was that it didn’t violate their
Community Standards. And how is this picture not violating their Community Standards? We tried
asking the moderator to take the picture down. 
The moderator refused.

Kat O'Malley | 2:16 pm, November 29, 2012 | Link

 

Why are you dealing with the moderators of hate
pages, Kat? That’s a fool’s game. I just get my
networks to block them and we carry on as usual.

Alan | 7:05 pm, November 29, 2012 | Link

 

Alan, because since Facebook is offering this
so-called “user friendly reporting system” I had
tried to amicably resolve this issue with the
moderators.  There is NO reasoning or
rationalizing with these people.

“Preliminary data seem to back this up. Among
the users who sent a message through this new
flow, roughly half said they felt positively
about the other person (called the “content
creator”) after they sent him or her the
message; less than 20 percent said they felt
negatively. (The team is still collecting and
analyzing data on how users feel before they
send the messages, and on how positively they
felt after sending a message through the old
flow.)
In this new social reporting system, half of
content creators deleted the offending photo
after they received the request to remove it,
whereas only a third deleted the photo under
the old system. Perhaps more importantly,
roughly 75 percent of the content creators
replied to the messages they received, using
new default text that the researchers crafted
for them. That’s a nearly 50 percent increase
from the number who replied to the old
messages.”

The above is so inaccurate that it’s laughable. 
Based upon the users that I’ve discussed this
issue with, we would say at least 90% of the
time there is NO positive dialogue with the
page moderators.

Kat O'Malley | 11:40 am, November 30, 2012 | Link

 