I was reading yet another piece over on SocialMediaToday about data suggesting that Facebook's EdgeRank favors live, manual posts.
My comment there was rejected by the labyrinth of logins on that site (likely my fault), so I'm posting it here.
Their article and the other comments are good, but here is what I believe is the real issue.
This study bothers me.
I believe that the extra work of logging in to a page at the right time of day, posting a link with a good graphic, editing the default title and description, and adding thought-provoking questions to encourage discussion is the key to success.
But how much of that overall program depends on which software you used to post?
Could it be that the third-party posts simply do a lousy job on all of these factors? Has Facebook admitted that it assigns a lower score just because a post was sent through the API?
I know when I'm reading a news feed, the style of a post, along with its content, makes a huge difference in what gets read, commented on, and clicked. I'm biased by my personal experience, but in years of reading thousands of posts, I can't recall EVER using the source as a primary flag for quality.
The differences between apps could easily be explained by the way they format posts for the API. Very few sites, for instance, allow me to share something on my page with edits and photos.
And then there are the skewed numbers on content quality. Since most marketers are dumping autoposts onto pages, while the most engaging status updates and posts are done by hand, it stands to reason that clicks are better for the live user.
In the example here, the two posts look very different. It's NOT the source; it's the content.
This does raise some good questions. Have you done any specific research on the difference in results between automated and manual updates? And do you have any recommendations for practices and programs? I had surmised that this must be the case, but the solution is the problem: how do I improve this?
The answer is not automation.
Regardless of whether you figure out a way to game EdgeRank, the real test of relevance is your relationship with your readers. In fact, even calling them "readers" is going to be a problem, as we usually define a "reader" as the recipient of one-way communication.
Today, all communication is moving toward conversation. Regardless of the tools you use, humans aren't so easily fooled.
That being said, I'm all for automation and tools that help organize and encourage real conversation. For instance, this blog post is available to the thousands visiting this site. That's automation. The comment system allowed you to answer, and me to engage here. Answering comments usually takes me more time than the original post. At first it seemed daunting. Now I've put in place tools and systems that make it enjoyable… and, I hope, more personally rewarding for you.