Evaluating the media: October 2012

PR geekiness - the tools & techniques to gain insights from PR exposure

Wednesday, October 31, 2012

Space on page... not to be discounted offline

I have been running a comparative analysis for a client for more than five years, and it is noticeable that their printed exposure has been dropping off over the last year or so. Their competitors have not been growing their coverage, but my client's share has been falling somewhat faster.

As the volumes of coverage are high and the drop-off only gradual, it's quite a challenge to explain it with representative examples. Luckily we track a number of common business areas, so it's possible to point to some over others as explaining the fall, but there is more to it than that.

When measuring the media, volume tends to be the default starting metric, and for many the ending one. One of the variables we have tracked is a measure of the space on page: the section of a cutting relevant to the organisation. Combined over yearly periods, this illustrates another side to the coverage. While the cuttings are getting fewer, they are on average getting longer. Maybe it's the type of stories the PR people are pitching to the media, or a structural shift on the media's part towards longer articles and inclusions.
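To make the arithmetic concrete, here is a small sketch in R (the figures are invented for illustration, not my client's data): dividing total space on page by the number of cuttings gives the average length, which can rise even while the volume falls.

```r
# Invented yearly figures for illustration: cutting counts and total
# relevant space on page (in column-centimetres)
cuttings <- c("2010" = 420, "2011" = 390, "2012" = 310)
space_cm <- c("2010" = 9660, "2011" = 9750, "2012" = 9300)

# Average space per cutting: fewer items, but each one longer
avg_len <- space_cm / cuttings
print(avg_len)  # 23, 25 and 30 column-cm respectively
```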

It's a metric which does not work so well online, but a lot of media is still printed, where I think it gives a useful perspective.

Tuesday, October 09, 2012

(a)R(hhh)...that's how it works!

I have done some SQL, HTML and CSS in the past... that's it for my coding. I have always thought it was not 'if' but 'when' I would again have to grapple with the viper-like serpent that is programming.

Well, I wrote that a few hours ago when I was stuck at a dead end, my code going nowhere and throwing up an error Google could not help with. But you may be pleased to know things have progressed: largely at the suggestion of Tony Hirst (@psychemedia), I have persisted with RStudio, and from being quite confused have been able to generate some quite exciting results.

What are R and RStudio? R is a statistical programming language (implemented largely in C and Fortran) focused on the functionality researchers need. RStudio is a user interface to R which, while it does not do away with coding, does simplify some of the more tedious processes.
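For a flavour of what that means in practice, here is a trivial session of my own devising (nothing to do with the sentiment work yet): R operates on whole vectors at once, so common research tasks need no loops at all.

```r
# Research-flavoured one-liners: R works on whole vectors at once
scores <- c(12, 15, 9, 21, 18, 15)
mean(scores)    # arithmetic mean: 15
sd(scores)      # sample standard deviation
table(scores)   # frequency count of each value
```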

It has been an interesting process, and the results in no way reflect a deep understanding on my part of the code generating them. And here's the thing... do you need to understand it? If you can appreciate the various processes (where the data calls are made, and how to refashion the process to collect a different set of data), is that enough?

On the web you will find more than snippets of code. I used this code submitted by Gaston Sanchez which, after a few false starts, proved able to collect up to 1,500 tweets on a given subject area (Starbucks in this case), then clean the feed of irrelevances like 'RT', followed by an analysis of relative favourability and emotional associations, both based on a fully trained Bayes classifier.
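To give a feel for the cleaning stage, here is a sketch of my own (the function name and patterns are hypothetical, not lifted from Gaston Sanchez's script) that strips 'RT', @mentions and links from a tweet before it reaches the classifier:

```r
# Hypothetical clean-up helper along the lines described above:
# remove retweet markers, @mentions, links and stray punctuation
clean_tweet <- function(txt) {
  txt <- gsub("\\bRT\\b", "", txt)                 # 'RT' markers
  txt <- gsub("@\\w+", "", txt)                    # @mentions
  txt <- gsub("http\\S+", "", txt)                 # links
  txt <- gsub("[^[:alnum:][:space:]']", " ", txt)  # punctuation
  txt <- gsub("\\s+", " ", txt)                    # squash whitespace
  gsub("^ | $", "", txt)                           # trim the ends
}

clean_tweet("RT @coffeefan: Loving my Starbucks latte! http://t.co/x")
# "Loving my Starbucks latte"
```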

The full code is not much short of a hundred lines; but is it understandable? Well, the syntax is challenging, but the instructions are clear enough and, once you have your head straight on RStudio, relatively straightforward to initiate. However, the issue will come when it does not quite do what's wanted. Then a lack of coding knowledge might become an issue! But the point to really push is that it has been possible to run some relatively complex processes, generating useful results, after only a few weeks.

Friday, October 05, 2012

Text analysis using R

Just a brief post to express my excitement at generating some (meaningful) results using R. While many might see it as early days, this is the first tangible count of the incidence of key emotions in a Twitter stream relating to Sony mobiles:

> library(sentiment)  # provides classify_emotion()
> library(xtable)     # formats the result as a LaTeX table
> dataSet <- read.csv("SonyUKMentions.csv")
> em <- classify_emotion(dataSet$Summary, algorithm = "bayes")
> print(xtable(table(em[, "BEST_FIT"]), caption = "Tweet emotion"))
% latex table generated in R 2.15.1 by xtable 1.7-0 package
% Fri Oct 05 16:39:53 2012
\begin{table}[ht]
\begin{center}
\begin{tabular}{rr}
  \hline
 & V1 \\
  \hline
anger &  21 \\
  disgust &   7 \\
  fear &  13 \\
  joy & 275 \\
  sadness &  12 \\
  surprise &  12 \\
   \hline
\end{tabular}
\caption{Tweet emotion}
\end{center}
\end{table}

All that effort for six rows of emotion counts! I am going to carry on experimenting with the sentiment package, apply it to other feeds, and try some other packages, possibly involving graphs...