Knowledge Bridge: Global Intelligence for the Digital Transition (https://www.kbridge.org)

Guide #6: WhatsApp for Radio Toolkit
By Clémence Petit-Perrot and Linda Daniels (26 April 2019)
The sixth guidebook in our series was created through the innovation-support efforts of MDIF's SAMIP (South Africa Media Innovation Program) and the Children's Radio Foundation. This instalment in the MAS series of practical guides for media managers focuses on using WhatsApp for radio to reach audiences. The purpose of these guides is to help media decision-makers understand some of the key topics in digital news provision, and to give them practical support in adopting concepts that will improve their operations and streamline how their companies work.

About authors:

Clémence Petit-Perrot is the Children's Radio Foundation's Learning and Innovation Director. She oversees the development of all new initiatives within the organisation. Part of her portfolio includes piloting technological solutions like WhatsApp to increase listener engagement and measure the radio shows' impact. Before joining CRF, she was the Southern Africa correspondent for Radio France Internationale (RFI). She also worked for the South African production company DOXA, producing social documentary films and leading a project to digitise anti-Apartheid audiovisual archives.

Linda Daniels is a journalist by training and has worked in print, digital and broadcast media. She has reported on a range of issues, including climate change, intellectual property and South African politics. Her work has appeared in local and international publications. Between 2013 and 2018, she worked at the Children's Radio Foundation as the Radio Capacity Building Associate and managed the WhatsApp Integration project.

Please download and share the guide. We would love to hear from you – send any comments or suggestions to us at mas@mdif.org.

Guide #5: Introduction to Podcasting
By Erkki Mervaala (7 December 2018)
The fifth guidebook in the MAS series of practical guides for media managers focuses on podcasting. The purpose of these guides is to help media decision-makers understand some of the key topics in digital news provision, and to give them practical support in adopting concepts that will improve their operations and streamline how their companies work (see Guide #1: Product Management for Media Managers; Guide #2: Launching a paywall: What you and your team need to know; the case studies on paywall implementation; Guide #3: Best Practices for Data Journalism; and Guide #4: Facebook News Feed Changes: Impact and Actions).


What is needed to start a podcast? The guide covers:

  • What benefits a podcast can bring you
  • What a podcast is and isn't, plus the technical aspects
  • Planning your production, and what you should know before beginning
  • What equipment and software you need to create a podcast
  • Recording and editing the audio
  • Feeds and hosting, distribution and promotion
  • Analytics and metrics: finding and using your podcast data
  • Monetization and next steps


Please download and share the guide. We would love to hear from you – send any comments or suggestions to us at mas@mdif.org.


About the author: Erkki Mervaala is a former Program Manager and Digital Media Specialist for Media Development Investment Fund. He is also a member of the award-winning Finnish climate journalist collective Hyvän sään aikana and works as the managing editor for the climate news website of the same name. Mervaala has worked as a Central Europe foreign correspondent for several Finnish magazines and newspapers. He has also worked as a screenwriter for Yellow Film & TV, and as a web developer and UI/UX designer. He has been a podcaster since 2008.

You can contact him via e-mail.

Fighting fake news is not just a journalist's battle
By Jeremy Wagstaff (26 November 2018)

How do you fight fake news? It's a question I'm often asked, and as a journalist in the business for 35 years, I find it a frustrating one, because it's often presented as if it's a new question, and one that technology can answer. It's not new, but yes, technology can help — a little.

Fake news is as old as news itself, but at least in the West it was most clearly defined during World War I, when "the art of Propaganda was little more than born," in the words of Charles Montague, formerly a leader writer and theatre critic of The Manchester Guardian and latterly an intelligence captain in the trenches of France. Montague saw up close the early probing efforts to plant what were then called 'camouflage stories' in the local press in the hope of misleading the enemy; one such story, in an obscure science journal, recklessly overstated the Allies' ability to eavesdrop on German telephone calls in the field.

Montague, like his intelligence bosses, saw the huge potential fake news offered for deception — which, after all, was the business he was in. "If we really went the whole serpent," he wrote later, "the first day of any new war would see a wide, opaque veil of false news drawn over the whole face of our country." (Rankin, Nicholas: A Genius for Deception: How Cunning Helped the British Win Two World Wars)

This didn’t happen but there was enough censorship and enough force-fed propaganda of the British and American press for there to be a backlash in the wake of the war, as told in the March 2018 edition of Science Magazine (‘The science of fake news’). The norms of objectivity and balance that most of us abide by or aspire to today are those century-old ones wrought of that conflict.

So it’s worth remembering that the serpent we’re fighting isn’t some newly created hydra born out of social media: it’s age-old the servant of governments, movements, forces who understand well how minds work. But that’s only part of the story.

One of the first problems that media face is that while we stood on our plinths of noble principles, those plinths were for decades — nearly a century — built on powerful commercial interests. As the authors of the Science magazine article put it: "Local and national oligopolies created by the dominant 20th century technologies of information distribution (print and broadcast) sustained these norms. The internet has reduced many of those constraints on news dissemination."

It has, and very effectively. Not only that, it has helped change the language, format and tone, something we’ve been slow to pick up on. An academic study in 2012 by Regina Marchi of Rutgers University, based on interviews with 61 high school students, found “that teens gravitate toward fake news, ‘snarky’ talk radio, and opinionated current events shows more than official news, and do so not because they are disinterested in news, but because these kinds of sites often offer more substantive discussions of the news and its implications.” She quotes a 2005 study that such formats are “marked by a highly skeptical, alienated attitude to established politics and its representation that is actually the reverse of disinterest”.

Note that the 'fake news' reference predates the Trump era by a good three years. And the style, the content, the contempt for fact and sourcing: the trend was already visible a decade before Trump and others rode its coat-tails to power.

Time to thank the porn and gambling merchants. Again
By Jeremy Wagstaff (26 November 2018)

Here's another technology whose success you might have to chalk up to the gamblers, auctioneers and pornographers of the world: WebRTC.

There’s a pattern you might be forgiven for having missed: most technological developments on the web have been driven by these industries. Think online casinos. Think credit card usage. Think video. Think VR. Users demand greater financial and data privacy — after all, who wants to admit they’re signing up for porn or gambling sites? – and better bang for their buck — a streaming video better be a lot better quality than any of the millions of free sites out there to be worth signing up for, all of which pushes these industries tothe bleeding edge of innovation. Buried in early versions of the Bitcoin software – the cryptocurrency that offers a future of transactions beyond the gaze of banks and governments — are hints of a gambling connection, after all. WebRTC is now no different.

So first off, what is WebRTC, and why should you care? WebRTC is an open-source project that embeds real-time voice, text and video communications capabilities in web browsers. The technology enables peer-to-peer (P2P) communication between browsers, without requiring specialized software or browser plugins. In essence, it's the engine that powers a lot of the messaging that goes on between apps — think video, audio. The RTC bit, after all, stands for Real Time Communication. The Web bit is because the standard started life as a way to make communicating by video and audio through a web browser as easy as using text, without any plug-ins. But it has moved beyond that, and can now sit within apps on mobile phones and elsewhere.
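For the technically curious, here's a minimal sketch of what that looks like in practice: browser-side TypeScript that captures the camera and microphone and opens a peer connection. The sendToPeer function is a stand-in for whatever signalling channel a page might use (WebRTC deliberately leaves signalling to the application), and the STUN server is just a commonly used public example, not a requirement.

```typescript
// Minimal sketch: capture local media and begin a WebRTC call.
// sendToPeer() is hypothetical; signalling (exchanging offers and
// candidates between the two browsers) is left to the application.

declare function sendToPeer(msg: object): void;

async function startCall(): Promise<RTCPeerConnection> {
  // Ask the browser for camera and microphone; no plugin required.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });

  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Attach our media tracks so the remote peer can receive them.
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // ICE candidates describe possible network routes; relay each one
  // to the other side as it is discovered.
  pc.onicecandidate = (ev) => {
    if (ev.candidate) sendToPeer({ candidate: ev.candidate });
  };

  // Create a session offer and hand it to the signalling channel.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer({ offer });

  return pc;
}
```

The other browser answers with a matching setRemoteDescription/createAnswer exchange; once the candidates line up, audio and video flow directly between the two peers.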

That may seem like quite a modest goal. But it's only recently, with Apple's decision to include WebRTC in its Safari browser (from version 11, and in the iOS browser from version 12), that the years-long battle to make WebRTC the standard everyone can agree on has been more or less won. Now you can use a host of services to do Skype-like video and audio chatting via your browser without having to

  a) register with any service
  b) install any software
  c) worry about what kind of browser you're using.

(Of course, there are still caveats.)

This is quite a feat, if you think about it. Not because you've been dying to do this, exactly, but because it means that this kind of capability — real-time voice and/or video communication — can be added to any web page. This means, for example, that you can now stream directly from whatever news event your reporters are covering, or direct from your newsroom, or hold a Q&A with readers, without them having to worry about plug-ins, browser versions, and so on.

Beyond the browser

And it doesn’t stop there. WebRTC obviously started with the web, and the browser, but that was because that was where we spent most of our time back then. Now things are mobile, they’re often app-based, and the way we communicate, and the way we consume video, has changed a lot.

WebRTC has ended up being a way for the industry to agree on a bunch of standards, which has enabled all sorts of things to happen more quickly and seamlessly than they might have done had WebRTC standards groups not been working away in the background. Varun Singh, CEO of callstats.io, which measures the quality of experience of live and real-time media, explained that, thanks to this agreed suite of protocols, WebRTC has quickly wormed its way into pretty much every messaging app, from Snapchat to Facebook Messenger.

And it’s not just there. Dean Bubley, a consultant who has been watching WebRTC since 2009, believes “it’s still lacking in recognition in some quarters; there are many more applications that could benefit from it.” In a white paper funded and published by callstats.io, he explored how real-time communications are becoming embedded into devices and applications, into different formats, into processing (think AI) and more platforms. As users become more accustomed to accepting of voice and video technologies (would you have imagined we would have allowed listening robots called Alexa and Siri into our homes as easily as we have five or ten years ago, or been as comfortable taking selfies or video-ing ourselves?) So these technologies are likely to continue to evolve and, of course, become more commonplace in every day life

"In other words, society is now much more amenable to new use-cases for RTC," concludes Dean. "There is familiarity with many of the UI tools, and less self-consciousness in front of cameras. This, in turn, means that the 'cognitive load' on users' brains is lower, meaning that the interactions are more natural – and more productive."

It’s only in media, I suspect, that we’re still a little stuck with old formats: the newsreader, the correspondent talking to the camera, the interviewee stuck in a studio somewhere, or on Skype with a picture of a horse in the background. I strongly believe we have an opportunity to have a chance to break away from this.

In media

I see the opportunities here as at least initially more modest.

Firstly, internally: I have long been frustrated at how internal newsroom discussions can be starved of creative oxygen as much by poor technology decisions as by poor leadership. Reliance on dialling in to a conference-call number seems both archaic and a wasteful use of resources, and more often than not the more creative thinkers on the call are drowned out by the noisier ones. A simple WebRTC link in the browser, using tools like talky.io which require no plugins, should solve that.

Then there’s content. Varun of callstats.io says WebRTC offers content creators the chance to make content simply, just with a camera, which users can access from the news organisation’s website directly — bypassing YouTube or Twitter. Think webinars, game shows, he says: “it’s fairly trivial, say $50 to set something up, and have it viewed an unlimited number of times.”

So where do porn, gambling and auctioneering come into it?

Well, I think the future of WebRTC for media lies in the ability to seamlessly stream events to users as if you were a professional broadcaster. It’s one thing to be able to record or stream a few jerky minutes of a demonstration, but soon enough it will be possible — even expected — that any media organisation, large or small, can, with little preparation, livecast an event with little or no lag.

Alexandre Gouaillard, CTO of a company called Millicast, says these industries are the ones pushing for this. With Adobe no longer supporting the Flash plugin, long the mainstay of such industries, there's demand for low-latency video at scale: Millicast promises broadcast across the globe in less than 500 milliseconds. Packages range from 10 concurrent viewers (free) to 5,000 ($1,500 a month).

This may be lower latency than required, but it's a glimpse, I believe, of the future. I think we've been held back from using video partly because of the awkwardness of setting it up, and partly because of the glitchiness that makes interactions painful. Those days are coming to a close, mainly because of WebRTC.

You don’t have to be into interactive porn to imagine the possibilities of having people seamlessly integrated by a video connection, wherever they are: if your reporter can walk around with a smartphone mounted on a $150 gimbal, confident that every shot is being beamed to every user; or a Q&A with an editor accompanied by graphics and whiteboard is crystal clear to all 5,000 viewers in a monthly editorial catch-up, then perhaps in a year or two an augmented reality- or virtual reality-, or 360-degree- broadcast from a sports game or political rally — then we’ll be able to thank a porn star or a gambler for marking the way for us, once .

As the Big Data Era Arrives, It Pays To Remember What Data Journalism Is
By Jeremy Wagstaff (15 October 2018)

Data and journalism are natural bedfellows: without information we'd be lost. But has the creation of a sub-discipline that calls itself 'data journalism' helped or hindered the profession's embrace of the digital era?

In researching data journalism in the era of big data, I have found myself trying to define what "data" means in this context. When we refer to data we usually envisage significant amounts of numerical information, but as data sets get unimaginably large this itself becomes problematic. So I decided to take a step back and look closer to home at a successful data-driven story.

I chose a cluster of stories from Manila, by the Reuters reporting team of Clare Baldwin, Andrew Marshall and Manny Mogato, who covered Philippine president Rodrigo Duterte's war on drug suspects. (Transparency alert: I was, until earlier this year, an employee of Thomson Reuters.) The three of them won a Pulitzer in the International Reporting category for this coverage. I want to focus on just a couple of their stories, because I think doing so helps define what data journalism is. This is how their story of June 29, 2017 described how Philippine police were using hospitals to hide drug-war killings:

  • An analysis of crime data from two of Metro Manila’s five police districts and interviews with doctors, law enforcement officials and victims’ families point to one answer: Police were sending corpses to hospitals to destroy evidence at crime scenes and hide the fact that they were executing drug suspects.
  • Thousands of people have been killed since President Rodrigo Duterte took office on June 30 last year and declared war on what he called “the drug menace.” Among them were the seven victims from Old Balara who were declared dead on arrival at hospital.
  • A Reuters analysis of police reports covering the first eight months of the drug war reveals hundreds of cases like those in Old Balara. In Quezon City Police District and neighboring Manila Police District, 301 victims were taken to hospital after police drug operations. Only two survived. The rest were dead on arrival.
  • The data also shows a sharp increase in the number of drug suspects declared dead on arrival in these two districts each month. There were 10 cases at the start of the drug war in July 2016, representing 13 percent of police drug shooting deaths. By January 2017, the tally had risen to 51 cases or 85 percent. The totals grew along with international and domestic condemnation of Duterte’s campaign.

This is data journalism at its best. At its most raw. The simple process of finding a story, finding the data that supports and illustrates it, and then writing that story, using the findings to illuminate and prove it. Of course, the data set we're talking about here is smaller than in other data-driven stories, but it's still the point of the story: the difference between this story and a much lesser one.

But how did they come by that data? Andrew tells me it was done the old fashioned way: first, they got the tip, the anecdotal evidence that police were covering up deaths by using ambulances to carry away the dead. Then they went looking for proof — for police reports, which are public information in the Philippines and so can, in theory, be obtained legally. These they found, because they looked early and persistently. Then it was a question of assembling these, and cleaning them up. In some cases, it meant taking photos of barely-legible police blotters at a station entrance.

All their stories, Andrew told me, were driven by the reporters already having a sense of what the story was, and then looking for proof. That means knowing enough about the topic to have formed an opinion about what to look for: about what may have happened, about what angle you're hoping to prove, about what fresh evidence you believe the data will unearth for you. A data-driven story doesn't always mean wandering around the data without a clear idea of what you're looking for. In fact, it's better to already know. "The key thing," he told me, "is that this all grew out of street reporting. We wouldn't have thought to look for it if we hadn't heard."


That’s the first lesson from their experience. Data is something that is there that helps you prove — or refute — something you have already established to be likely from sources elsewhere. 

This is where I think data journalism can sometimes come adrift: by focusing too much on the "data" part of it, we lose sight of the "journalism" part. "It's the blend of street reporting and data analysis that paid the great dividend," Andrew said.

A definition of data journalism should probably start somewhere there, but it tends not to. Instead we tend to get: data journalism as a "set of tools", or "locating the outliers and identifying trends that are not just statistically significant but relevant", or "a way to tell richer stories" (various recent definitions). These are all good, but I'm not sure they're enough to help us define how best to use data for journalism.

By emphasizing data over journalism we risk removing and rarefying the craft, creating a barrier where none needs to exist. As the Philippine example shows, data is not always something that sits in databases, servers, libraries or archives. Nor is it something that you have to ask for. It's something you use to gather information, to help tell the story better and to reinforce the facts in your coverage. A study by Google last year trumpeted that more than half of newsrooms surveyed had a dedicated data journalist.

Aren’t we all, or shouldn’t we all consider ourselves, data journalists? Shouldn’t we all be looking for data to enrich — if not prove the thesis that underlies — our stories? 

Back to Andrew’s example. For the team it was something of a no-brainer to work on attaining this data. The story would have been unthinkable without it. This might not be part of every journalist’s instinct, but it’s telling in this example, that it became central to their story and took weeks, months to assemble.

The place to start was with the local police and hospitals. Obtaining the data was legal, but it wasn't easy, and it became increasingly less so as the work developed. Clare Baldwin was greeted at one station by homicide detectives who shouted and lifted their shirts to display their guns. Later, Andrew told me, it became much more difficult to get access to the information as the Duterte government realized what it was being used for.

The lesson from this is that data is not necessarily something that is easy to get, or easily given up. Nor does it arrive in pristine form; it requires major work in verifying, identifying and compiling. In that respect it is more akin to the work of Bellingcat, the crowdsourcing website created by journalist Eliot Higgins, which conducts what it calls open-source investigations of data sources ranging from social media photographs to online databases.

Of course, not all stories are going to be like this, and not all data is going to be like this. But all journalism, data or otherwise, requires thinking that starts from a similar place: a strong sense of what the story might be and where to find it; a sense of whether there might be data that could help, and where to find it; the persistence not to be daunted by obtaining that data, or by the condition it is in; and the knowledge to understand the context of the data you have and what to do with it. And finally, in Andrew's words, "to use that data quickly, not just to sit on it".

The team's stories stand on their own. As another example, the Reuters graphics team, led by Simon Scarr, also did some extraordinary visualizations, which helped readers understand stories better and provided additional impact. Visualization and data journalism are obvious bedfellows.

This isn’t to say sometimes the idea for a story doesn’t lie in the data itself. Data journalism can mean taking data as inspiration to explore and write a story — rather than beginning the process by talking to sources.. At its most basic this could be a simple story about a company’s results, or a country’s quarterly trade figures — data-driven stories where the journalist reports the new numbers, compares them with the earlier numbers, and then adds some comment.

But when there is overemphasis on data journalism as a separate part of the news process, it can pose problems. Quite a lot has been written about a backlash against 'nerd journalists' and an exodus of computer-literate staff from newsrooms, sick of the skepticism and relatively low salaries. I've not witnessed this firsthand, but I have seen how little interest there is in learning more about the 'techie' side of journalism that might help reporters wrestle with data beyond their familiar charts and tables. Editors are partly to blame: stories that involve dirty or larger data-sets take longer and so are often unwelcome, unless they fall into a special category. So reporters quickly figure out they're better off not being overly ambitious when it comes to collecting data.

Data journalism of this kind tends to be limited to a handful of really strong players. In my neck of the woods in South and Southeast Asia there's an impressive array of indigenous (i.e. not one of the big multinationals) outfits. Malaysiakini are almost old hands at this process now. Their sub-editor, Aun Qi Koh, told me that as the work gets easier in terms of knowing which tools to use and how to use them, it also gets harder, because "we want to push ourselves to do more, and to do better… and of course, there's the challenge of trying to streamline a process that often involves a team of journalists, graphic designers, programmers, social media marketers and translators."

This is impressive, and demonstrates what is possible. News organizations are making the most of governments' gradual commitment to opening up their data, and leveraging issues that the public care about. In the Philippines, Rappler has been making waves, and won an award for its #SaferRoadsPH campaign, which compiled and visualized statistics on road-crash incidents and has led to local police drawing pedestrian lanes outside schools.

These kinds of initiatives are tailor-made for visual data journalism, not least because journalists don't have to rely on government data that might be absent, incomplete, wrong or, in some cases, just unreliable. Malaysiakini's Aun Qi Koh said that the data in a government portal set up in 2014 was neither "organized properly nor updated regularly." That seems to be par for the course. And while staff everywhere need better training, those who do have the necessary training tend to be snapped up by, and attracted to, private-sector companies rather than relatively low-paying journalist positions, according to Andrew Thornley, a development consultant in Jakarta.

I’m impressed by all these projects, especially those doing great journalism on a shoestring. But I hope it doesn’t sound churlish if I say I still think this is scratching the surface of what is possible, and that we may not be best preparing ourselves as well as we could for the era of big data.

Take this story as an example. Isao Matsunami of Tokyo Shimbun is quoted in the Data Journalism Handbook talking about his experience after the earthquake and Fukushima nuclear disaster in 2011: "We were at a loss when the government and experts had no credible data about the damage," he wrote. "When officials hid SPEEDI data (predicted diffusion of radioactive materials) from the public, we were not prepared to decode it even if it were leaked. Volunteers began to collect radioactive data by using their own devices but we were not armed with the knowledge of statistics, interpolation, visualization and so on. Journalists need to have access to raw data, and to learn not to rely on official interpretations of it."

The data he’s talking to was created by Safecast, an NGO based in Japan which started building its own devices and deploying its own volunteers when it realised that there was no reliable and granular government data on the radiation around Fukushima. Now it produces its own open source hardware and has one of the largest such data-sets in the world, covering air quality as well, covering sizeable chunks of the world.

The future of data journalism lies, I believe, in exactly this: building early, strong relationships with outside groups — perhaps even funding them. More routinely, journalists should find their own sources of raw data where it’s relevant and practical, and fold the mindset, tools and approach of data journalism into their daily workflows the rest of the time. You can already see evidence of the latter on sites like Medium and Bloomberg Gadfly, where journalists are encouraged to incorporate data and charts into their stories and to build an argument. Much of this is already happening: Google’s survey last year found that 42% of reporters use data to tell stories twice or more per week.

But the kind of data being used may be open to question. Data is no more a journalist’s friend than any source — it has an agenda, it’s fallible, and it can often be misquoted or quoted out of context. As journalists we tend to trust statistics, and interpretation of those statistics, a little too readily.

For the sake of balance, here's a Reuters story from 2014, still online, that quotes an academic study ("Anti-gay communities linked to shorter lives") despite the fact that in February this year a considerable correction was posted to the original study. ("Once the error was corrected, there was no longer a significant association between structural stigma and mortality risk among the sample of 914 sexual minorities.") We are not, as journalists, usually given to expressing skepticism about data provided by academics and similar sources, but maybe we should be. (And I suppose we should be better at policing our stories, even if the correction is required years after the story first appeared.)

Tony Nash, founder of one of the biggest single troves of economic and trade data online at CompleteIntel.com, believes journalists tend to let their guard down when it comes to data: “The biggest problem with data journalism is that data is taken at face value. No credible journalist would just print a press release but they’ll republish data without serious probing and validation. Statistics agencies, information services firms, polling firms, etc. all laugh at this.”

Day-to-day journalism, then, could benefit from being both more skeptical and more ambitious about the data it uses. Tony says he's tried in vain to interest journalists in using his service to mash stories together, so instead he writes his own newsletter, often 'breaking' stories long before the media: "In July 2017 I showed that Mexico and China are trade competitors but journos always believe China has an upper hand in trade. For all of 2017, Mexico exported more TVs to the US than China. For the first time. It was not a surprise to us. Most journos still have not woken up to that," he told me recently.

Such skepticism and ambition could be coupled with tools that make it easier to build visuals into stories. Datawrapper, a chart-making tool, for example, has launched an extension called River which makes it easier for journalists to identify stories or add data to breaking stories.

But this is just the start. We are in the era of big data, and we are only at its beginning. The Internet of Things (IoT) is a fancy term for the trend of devices being connected to the internet (rather than people through their devices, as it were). There will be sensors on everything; light switches, washing machines, pacemakers, weather-vanes, even cartons of milk will tell us whether they're on or off, full or empty, fresh or sour. All will give off data. Right now only about 10% of that data is being captured, but that will change. According to IDC, a technology consultancy, more than 90 percent of this IoT data will be available on the cloud, meaning that it can be analyzed by governments, by companies, and possibly by journalists. The market for all this big data, according to IDC, will grow from $130.1 billion in 2018 to over $203 billion in 2020. This market will primarily be about decision-making: a cultural shift to "data-driven decision making".

You can see some of this in familiar patterns already. Most of it is being used to better understand users — think Amazon and Netflix getting a better handle on what you want to buy or watch next. But that's pretty easy. How about harder stuff, such as taking huge disparate data sets — the entire firehose of Twitter, say, along with Google searches and Facebook usage (all anonymized, of course) — to slice target audiences very thinly? One Singapore-based company I spoke to has been able to build a very granular (as they call it) picture of who they want to target, down to the particular tower block, their food preference (pizza), music (goth) and entertainment (English Premier League). Not only does this make advertisers happy that they're going after the right people, it also makes targeting much cheaper.

But this is just the beginning of big data. Everything will be spitting out data — sensors in cars, satellites, people, buildings; everything we do, say and write. Knowing what data there is will be key. Another Reuters graphics story, which won Data Visualization of the Year at the Data Journalism Awards 2018, involved realising the value of a data-set of GPS coordinates of infrastructure, gathered by aid agencies working on the ground at a Rohingya refugee camp in Cox's Bazar, and using it to analyze the health risks of locating water pumps too close to makeshift toilets. And then there's knowing whether there might be other data hiding within the data: Buzzfeed's application of machine learning to Flightradar aircraft data, singling out the clues that revealed hidden surveillance flights, also won a Data Journalism Award.

These are small glimpses of the future of the kinds of data journalism we might see.

In the future it will be second nature for journalists not only to know what kind of data is being collected and to turn it to their own uses, but to try to collect it pre-emptively. This will require lateral thinking. Journalists have been using satellite imagery for several years as part of their investigations, but this is likely to become even easier, cheaper and more varied. One entrepreneur I spoke to recently is launching dozens of micro-satellites to monitor methane emissions: data of interest to oil and gas companies worried about gas leaks, governments enforcing greenhouse-gas regulations, and hedge funds looking for exclusive economic indicators. Imagine if a journalist were able to peruse that data and uncover military activity from heat emissions even before governments know about it.

This is just the tip of the iceberg, and while journalists may not be at the front of the queue for this kind of data, it’s going to be important to know what kind of data is out there. Already the notion of what a “leading indicator” is has begun to change — an investor in China is much more likely to be trawling through data from Baidu than government statistics to get a sense of what is going on, and smart journalists already know that.

The future of data journalism, if it is successful, will still be journalism. And data will still be data. But as the world of data gets bigger, it pays to remember that the relationship between ‘data’ and journalism is still about thinking and acting creatively and quickly to uncover stories others may not want us to tell. 

Podcasts: Celebrate the resurgence but be cautious
By Jeremy Wagstaff (15 August 2018)

Tech trends are fickle things. Back in 2004, if you were starting a media business online, or thinking of expanding your offline media business, one direction seemed obvious: adopt RSS, or really simple syndication, so users can get a feed of your content easily, without signing up for newsletters. The term ‘RSS’ overtook ‘newsletter’ as a search term on Google in July of that year.

A year or so later, your crack team of tech advisors would have told you you needed to get into podcasts. Everyone has an iPod, they'd tell you, and everyone is listening to this stuff. Indeed, by early 2006 'podcast' had overtaken 'RSS' as a search term on Google. Ditto MySpace: you would have been told to get your business onto this impressive social networking site ('whatever that is', you would have been forgiven for thinking back then). So you start work on that.

Then, in 2009, the Amazon Kindle e-reader swept out of nowhere to make electronic publishing the wave of the future, overtaking both 'podcast' and 'RSS'. And then, of course, there was Facebook. And Twitter.

You get the picture: sometimes inexorable trends aren't what they seem. RSS, it turns out, was great for delivering information to people but was too fiddly for most folk. Google, whose RSS reader had pushed most other players out of the business, closed the product down in 2013, citing declining use. Meanwhile newsletters, those unsexy throwbacks, are still doing fine.

So what about podcasts? Were the advisors right? Well, yes and no.

True, interest in podcasting (as a search term on Google, as reliable an indicator as any) peaked in early 2006. It then declined until the launch in late 2014 of Serial, whose first season, exploring a 1999 murder in Baltimore, singlehandedly pushed the podcasting niche into the mainstream. In short, podcasts are that rare breed among tech trends: they're getting a second wind.

So what is driving this, and are podcasts worth doing?

Well, it’s true that Serial jumpstarted a fresh wave of interest. The appeal of podcasts is that they time-shift — users play them when they want, in the order they want, where they want. This may seem obvious, but Serial added a key ingredient: the serialized approach, where the story was being shaped as it went. This invited audience participation, suspense and a feeling that it was unclear where it was going.

All these elements helped differentiate podcasts from other forms of entertainment. At the same time, those coming in late could easily download old episodes: ‘Bingeable listens’ is even a category on iTunes, still the epicentre of podcasting.

The data all point to a growing market. Most figures are U.S.-centric so let’s look at another market: Australia. Recent surveys there suggest that nearly 9 million people will be listening to podcasts by 2022 — a third of the projected population.

Big players are taking note. Apple is improving its metrics, and applying some standards to podcasts it accepts for its iTunes platform and podcasting app. After leaving the field alone for years, Google is jumping in with its own Android app. Amazon has tried to add to its Audible audiobook service with some original programming, although it’s not clear how well that’s going.

Investors are interested: Luminary Media secured $40 million in venture capital funding for its subscription-based service. And of course Spotify has added NPR's back catalogue to its subscription service. Companies like Audible and Spotify are already in a sweet spot because they have already convinced users to subscribe; most podcasts are free, and it's hard to change users' minds, as we've found to our cost in online journalism.

But of course, as we’ve learned from the past: trends can be reversed, even when they’re enjoying a second life. So will podcasts wither too?

Here’s how I see it for media players. Don’t do podcasts as an afterthought; it’s your brand and if you mess it up listeners might not come back. But do see how much you can do without having to create content afresh. If you’re in the spoken word business already, then package up 10 of your best programs and see, after a year, which ones are gaining a following.

And despite the talk of growing investment and advertising interest, don't do it for the money. The industry is still too young and unstructured, the hits too unpredictable. The Interactive Advertising Bureau has released two sets of proposals to standardise advertising metrics across the industry, and uptake has grown. But some podcasters are nervous, because their reported download numbers would inevitably take a knock, at least in the short term.

Then there’s the problem of the elephants on the grass. Apple dominates the space because no podcast can afford to not be on its platform. Google is now serious about podcasts, which could be good news for podcasters in Android-heavy markets. But the app is still pretty raw, and of course will only work on Android devices, leaving those cross-platform podcast players like Overcast more appealing to many.

These big players all seek to control the choke-points in the system. They can, like Apple’s AppStore, create markets, but they can also trample them.

And there are lots of pieces missing, another sign of a wild west. The technology for inserting ads, for example, is still not quite there: The Washington Post in July felt it necessary to develop its own internal technology, Rhapsocord, for inserting ads into podcasts. This reminds me of the early days of the web, when everything was so new we didn't even think of calling it an 'ecosystem.' Only a handful of companies survived that era.

It is possible to cover costs and attract advertisers, and it should soon be possible to weave podcasts into broader subscriptions. But right now it's probably better to think of honing your podcasting skills and ideas than of viewing podcasts as a revenue stream in their own right.

Talking Heads: Speech recognition tools could help ease the newsroom's great bottleneck
By Jeremy Wagstaff (7 August 2018)

The bane of any reporter's life is returning from an interview and then having to transcribe the recording of it. Text reporters can get away with some shorthand and a few notes, as they probably only need a quote or two. But radio and TV journalists, and anyone seeking to squeeze a little more out of an interview, are stuck with the painstaking process of going over the recording and either correcting their notes or typing out the transcript afresh. It's not fun.

Technology has been promising to fix this for a while. There have been products like Nuance's Dragon NaturallySpeaking, which since the late 1990s has been chipping away at speech recognition. But it required training the software to be familiar with your voice, didn't work well with other people's voices, and was, at least for me, a little too error-prone to be genuinely useful.

But there are now options.

I’ve been testing a couple — Trint (trint.com) and Descript (descript.com) — which do an excellent job of automatically turning an interview recording into a transcript you can work with. And they’re relatively cheap: expect to pay about $15 for an hour’s worth of audio. It’ll take about five minutes for the transcript to be ready, and then both provide pretty good editors (Descript in app form, Trint in a web app) where you can tidy up the text and fix errors. The underlying audio is mapped to the text, so editing text and moving through the audio is painless. Keystrokes allow you to switch quickly between listening and editing. Descript even lets you edit the audio, so you could prepare an interview for broadcast or podcast.

I would say that on the whole you save yourself a couple of hours per hour of audio. For a journalist this means you can have the semblance of a transcript to work from within minutes of the interview finishing. If you're under time pressure, that's a serious time saver.

There are several other apps offering something similar: Otter, from AISense, is in essence a voice recorder that automatically transcribes whatever is being recorded, in real time. Temi and Scribie are also worth checking out.

So how does this work? And why now? Well, as with a lot of tech advances, it has to do with algorithms, cloud computing and data. The algorithm comes first, because that is the part that says 'this sound is someone saying hello, so type hello.' In the early days — before cloud computing came along — that algorithm had to be very efficient: it needed to be good because it had to work on a personal computer or mobile device.

Cloud computing helped change that, because then the companies trying to do this were not constrained by hardware. In the cloud they could throw as much computing power as they wanted at it. But it doesn’t mean that computers are doing all the work — the algorithms still need something to work from, examples they can learn from. So a lot of the advances have come from a hybrid approach: humans do some of the work and train the computer algorithms to get better.

And now, at least in the case of the services I have played with, the job has been handed over to algorithms entirely. (And with each bit that we correct in their apps, they learn a little more.) These example-driven algorithms have replaced the old classical ones, which had to be specified precisely. The new algorithms teach themselves; you simply give them a bunch of data and tell them: 'this is how people have transcribed it; now go away and figure out how to do that.'

This means I have to add a few caveats. This kind of machine transcription is not trying to perfectly transcribe each utterance. It is applying what it has learned from previous transcripts, so if those transcripts aren't great, the results won't be great. This can be good: Trint, for example, leaves out a lot of the verbal tics we use in speech — ers, ahs, ums — because human transcribers would naturally do that. But it can also mistranscribe whole sentences which make sense, but bear no relation to what the speaker said. So whereas in usual transcriptions you might be scanning for the odd misheard word or misspelling, here you need to keep an eye out for entirely incorrect phrases. This could be fatal if you end up using a quote in a story!

There’s a bigger caveat too: accents can easily put these services off. Trint can cope with most European languages but in one case it could not handle someone speaking English with a Middle Eastern accent despite their grammar and syntax being excellent. Likewise, when I used Trint’s option of selecting an Australian accent (over a North American or British one, the other options) for the transcription, the Australian interviewee appeared to be talking about crocodiles, tucker, barbies and tinnies, and other Australiana, whereas in reality he talked about nothing of the sort. The training data was used to such terms and must have applied higher probabilities to him using those words than what he actually said.

This means that I would not be confident recommending any of these services for situations where non-European languages are being spoken, or where a strong accent is used. This is largely because of a lack of freely available training data. Academics are working to fix at least some of these problems: I've seen recent papers addressing indigenous South African languages, as well as speech in which speakers switch between languages, such as Frisian-Dutch.

Give these apps a chance if you haven’t already. Behind this is a big step into the future where computers can more readily understand what we say, and what we say can easily be transcribed and stored. It has both exciting and scary implications. But for journalism it helps ease a significant bottleneck in getting what people are saying into our stories.

The Blockchain and Journalism: Saviour or Snake Oil?
By Jeremy Wagstaff (26 July 2018)

We are currently in a phase of seeing in blockchain, the ledger technology that underpins cryptocurrencies like Bitcoin, the solution to problems in nearly every industry. There is something alluring about a technology that is so easy to set up, does not require a leader or central controller, and which can store anything permanently. But would it work for media?

First off, it’s worth talking about what blockchain is – and isn’t. Blockchain is the name we have given the underlying database system created by Satoshi Nakamoto, the so-far unidentified maker of bitcoin. Nakamoto needed to solve a couple of problems if he (or she) was to create a digital currency that was unhackable. He first needed to get around the problem of copying: when it’s easy to copy something digital, a digital currency needs to not only be impossible to duplicate, but also everyone needs to be able to see that it cannot be duplicated. The other problem he wanted to get around is to make sure the process of recording and validating transactions was not dominated by one person, either relying on some central authority, or that someone could take over the system and manipulate transactions.

These problems were neatly solved by the blockchain. Embedded in software, copies of the ledger of transactions would be stored on multiple computers — basically anyone who wanted to join in. All subsequent transactions would be added to the ledger in sequential order, connecting each block cryptographically, so any attempt at tampering with the record would be discarded. The task of recording those transactions would be done by miners — people who had a copy of the blockchain on their computer, and used their computer’s resources to run through permutations until they found a particular sequence of digits. The first to do so would have the right to add a block to the chain, and earn bitcoin. This created an incentive for people to participate in storing copies of the blockchain, and to record the transactions.
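For readers who like to see the moving parts, here's a minimal sketch of those two ideas, hash-linked blocks and brute-force mining, in Node-style TypeScript. The difficulty is kept trivially low, and everything a real blockchain adds (signatures, Merkle trees, peer-to-peer consensus) is left out.

```typescript
// Minimal sketch of a hash-linked chain with toy proof-of-work.

import { createHash } from "crypto";

interface Block {
  index: number;
  prevHash: string; // cryptographic link to the previous block
  data: string;     // the "transactions"
  nonce: number;    // varied by miners until the hash qualifies
  hash: string;
}

function hashBlock(index: number, prevHash: string, data: string, nonce: number): string {
  return createHash("sha256")
    .update(`${index}|${prevHash}|${data}|${nonce}`)
    .digest("hex");
}

// "Mining": try nonces until the hash starts with enough zeros.
function mine(index: number, prevHash: string, data: string, difficulty = 4): Block {
  let nonce = 0;
  let hash = hashBlock(index, prevHash, data, nonce);
  while (!hash.startsWith("0".repeat(difficulty))) {
    nonce++;
    hash = hashBlock(index, prevHash, data, nonce);
  }
  return { index, prevHash, data, nonce, hash };
}

// Any tampering with a recorded block breaks the chain of hashes.
function verify(chain: Block[]): boolean {
  return chain.every((b, i) =>
    b.hash === hashBlock(b.index, b.prevHash, b.data, b.nonce) &&
    (i === 0 || b.prevHash === chain[i - 1].hash)
  );
}

const genesis = mine(0, "0".repeat(64), "genesis");
const next = mine(1, genesis.hash, "alice pays bob 1 coin");
console.log(verify([genesis, next])); // true
next.data = "alice pays bob 1000 coins";
console.log(verify([genesis, next])); // false: the hash no longer matches
```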

So what has all this got to do with journalism? Well, the blockchain has lots of parts to it that appeal. Together they offer what some believe would be a different way of connecting the components of the media economy — those who produce content, those who consume it, those who publish it and those who currently finance it through advertising.

To understand this, it’s better to take each part of blockchain’s appeal one by one.

Micro-transactions

Micropayments — basically defined as pay per article — have long been the holy grail of digital media, as an adjunct to rather than a replacement for subscription and advertising. The idea is compelling because it means that people who care enough about a single piece of content could pay for it; no need for a subscription, no need to whip out a credit card, no need to think too hard about whether the content is worth it.

The reality is that micropayments will only work when transaction costs are reduced to near zero. This hasn't happened, because we're still using credit cards, or a variation thereof (PayPal, Apple Pay), where costs remain high. The alternative is to store value elsewhere — in a wallet, say — which is then transferred in the form of micropayments. But that is still one or two steps too many for most users. Wallets are only appealing when there are obvious benefits or no alternative. Think ride sharing, or bike sharing: if I can only unlock a bike by uploading credit to the operator's account, I'll do it, but I won't be happy, because that money is all locked up with them. And, as one recent case showed, the money might disappear entirely if the company closes down suddenly. Expect wallets to get a bad rap from here on in.

So how could blockchain help? The first, and perhaps only, proven use case of blockchain is Bitcoin. The Bitcoin blockchain network has been running for nearly 10 years, and has an uptime of 99.992364787%. (Really.) So it's a proven payments system. Unfortunately, it's also hugely expensive: moving bitcoin from one address to another (in other words, making a transaction) is still costly — often more than the value of the transaction itself. This is because the miners — the people running the computers that add blocks to the blockchain — need an incentive beyond the bitcoin earned from correctly 'guessing' the mathematical puzzle, so those making a transaction add a 'tip' to the transaction request to bump it up the queue. All this is hugely inefficient and often means transactions can take hours to be recorded. That is fine if you are moving large amounts, or if you just see bitcoin as a valuable commodity, but it is not what Bitcoin and blockchain were designed for. The idea was to make it possible for people to transact simply, securely, and without anyone creaming anything off the top (or blocking the transaction).

Now we’re getting closer. If cryptocurrencies can overcome some of their limitations — high transaction costs associated with adding blocks to the chain, poor usability and security issues — then they definitely offer a way forward. You’d still have to convert from fiat into a currency or token, but that might be feasible if the transaction costs can be lower than real-world transactions. This will probably happen first on Bitcoin Cash, which is what is called a ‘hard fork’ from the original Bitcoin (now called Bitcoin Core.) Bitcoin Cash adherents talk about reclaiming Bitcoin from the Core people by focusing on increasing the transaction volume; Bitcoin Cash supports about 100 transactions per second. Compare that to Bitcoin Core’s 7.

One example of a media company exploiting this is Yours.org. The site, according to its founder Ryan Charles, is a platform designed to reward content creators. After trying several other cryptocurrencies unsuccessfully, the company turned to Bitcoin Cash, allowing users to charge for content if they wish. One user, Rivers and Mountains, charges about $5 for the full article after a short précis; his articles (about Bitcoin Cash, mostly) earn him up to $900 each. (Yours charges 10 cents to post content and takes 5% of purchases.)

As Charles puts it: "With Bitcoin Cash we have actual low fees and the payments are actually irreversible. This is kind of amazing. This didn't used to exist. This is actually the fantasy of micropayments that people have talked about since the 1990s. We actually have it for real starting last August."

Tokenisation

Yours.org is somewhat unusual in that no extra tokens are involved: you buy Bitcoin Cash and you use that to pay for content. That may end up being the most popular way of doing things. But most other platforms in this space use their own tokens — basically a version of bitcoin on its own blockchain, like a separate currency. Remember, a blockchain is like any database: it can store pretty much whatever you want. Bitcoin, a digital currency, was the original use case which inspired (and required) blockchain, but anything could be stored there, usually in token form.

Steemit, for example, is a social media platform that rewards users with its own token — Steem — which can be converted into dollars on an exchange. And, like Yours, it rewards not only content creators but anyone clicking on like buttons or adding or voting on comments. All this, the argument goes, helps to oil the ecosystem and promote better content. (Steemit is doing very well, although you might think of it as more of a social platform than a media one. Steem's market capitalisation — if you add all the tokens together and sell them at the present price — is more than $400 million at the time of writing. It has about half a million accounts.)

Having tokens opens up new ways to move value around the system. Brave, for example, is essentially a browser like Safari, Firefox or Chrome that, among other features, builds in a way for users to reward publishers. It works like this: download Brave, buy some Basic Attention Tokens (BAT) with a cryptocurrency like bitcoin, and then decide how much you're going to pay your favourite websites each month. So long as you use the Brave browser to visit those websites, your contribution will be calculated and distributed automatically.

This is probably too many steps for most people, but it’s a start. Brave, like a lot of blockchain-based startups, raised money through something called an Initial Coin Offering, or ICO. An ICO is a bit like an IPO, in that those who are enthusiastic about the business buy into it by buying the tokens. Owners of those tokens can then use them in some way tied to the service. But as I explain below, this is not quite as simple, or legal, as it sounds.

In theory, though, the idea is simple enough: those holding tokens can reward other people on the same platform — Brave, for example — for activity that benefits them. The token is a currency, but with benefits. The obvious one is paying for articles, but tokens could also be used, for example, to reward contributors to an article (crowd-sourcing): an author could ask for data for an article, and disburse tokens matching the size of each contribution.

The potential of tokens is that they could unleash all sorts of new micro-transactions, effectively monetising content and content-related activity that has so far gone unmonetised. Take existing content — all the stories, photos and videos lying in archives that can’t readily be monetised because doing so would be too administratively cumbersome. Each item could be indexed to the blockchain and each user charged via a smart contract (more on those below). Tokens also promise a way to increase the overall value of content, and of access to content created by other readers. You could, for example, charge users to comment on certain stories, filtering out (some of) the time-wasters and trolls and so raising both the bar and the value of what commenters produce. Similarly, you could charge for access to that content, creating a sort of club of readers. These charges would be small, but might act as enough of a barrier to flippant and time-wasting contributions.
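
As a rough sketch of the archive idea: each item is identified on the chain by a hash of its content, with a price attached, and a purchase simply debits the reader’s token balance. All the names and the charging logic here are hypothetical stand-ins for what a real smart contract would do:

    import hashlib

    archive_index = {}  # content hash -> price in tokens (the on-chain index)

    def register(content: bytes, price_tokens: int) -> str:
        item_id = hashlib.sha256(content).hexdigest()
        archive_index[item_id] = price_tokens
        return item_id

    def purchase(item_id: str, wallet: dict) -> str:
        price = archive_index[item_id]
        if wallet["balance"] < price:
            raise ValueError("insufficient tokens")
        wallet["balance"] -= price  # a real contract would also grant access
        return item_id

    photo_id = register(b"bytes of an archived 1994 photo", price_tokens=2)
    reader = {"balance": 10}
    purchase(photo_id, reader)
    print(reader["balance"])  # 8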

Fighting censorship

Another appeal of blockchain technology is its distributed, decentralised nature. The blockchain — the ledger of transactions, but also potentially the database of the content itself — is not held anywhere centrally, so no one person or institution, in theory, can close it down or change that content. (Nor, in theory, can they monitor it because, while the transactions themselves may be visible to all, who or what exactly was transacted may not be. The Bitcoin blockchain, for example, can be explored in detail, but all you can see are amounts of bitcoin, bitcoin addresses (where the bitcoin travelled from and to) and the date and time of the transaction. Deeper study might reveal patterns, especially when an address is linked to an individual. But it’s detective work, and still more art than science.)

The blockchain — the ledger backbone — can therefore not be easily destroyed, disrupted, hacked or altered. That means it is more likely to survive some government’s, or individual’s, attempt to stop information from finding its way out. But it also means that the information stored on it has credibility: it’s much more likely not to have been tampered with, and because it has timestamps and other data attached, can be relied upon as an accurate record of what happened.

So several outfits are exploring the opportunities blockchain offers to reduce the potential for censorship. Publiq, for example, offers distributed storage infrastructure to hold all content — think of a peer-to-peer network like Napster — where the users who host the content are rewarded with Publiq’s tokens. A hash — a short unique code — of each piece of content on Publiq’s P2P network would be stored in Publiq’s blockchain, so any corruption of the data, intentional or otherwise, would be noticed straight away. No one can alter or remove existing content — think censors or fake-news hackers — but the content can be amended: authors could, for example, add notes or corrections to the original. Publiq’s Gagik Yeghiazarian tells me this has already happened, when one content provider was able to add amended transcripts to a story that some readers rightly claimed was incorrect. The correction was added to the original content, leaving it annotated but intact. With even mainstream news organisations over-writing or ‘fixing’ content without properly flagging it to readers, this feature is welcome.
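
The tamper-detection part is easy to illustrate: re-hash the content you are served and compare it with the hash recorded on the chain. A minimal sketch (SHA-256 standing in for whatever hash Publiq actually uses):

    import hashlib

    def fingerprint(content: bytes) -> str:
        # The 'short unique code' recorded on the chain at publish time.
        return hashlib.sha256(content).hexdigest()

    published = b"Original article text."
    on_chain_hash = fingerprint(published)  # immutable once recorded

    # Later, a node serves the content; anyone can re-hash and compare.
    served = b"Original article text, quietly rewritten."
    if fingerprint(served) != on_chain_hash:
        print("served copy does not match the on-chain record -- reject it")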

Funding

The obvious way to reward journalism — the creators of content — via the blockchain is to remove the obstacles preventing people from paying for content that has been created — the micropayments model described above. That’s great for people who can create content on spec — in the hope that someone is going to reward them for it, either through tips or through a paywall.

Then there’s the funding of proactive journalism: the crowdsourced model, where enough people believe the story or content will be worth consuming that they’re ready to pay for it in advance. Here too the blockchain can help, not only with micropayments but with an immutable record of who paid what for what, and what that entitles them to.

But what about bigger projects that require deeper funding — an investigative podcast series, say, or a documentary? Companies like Qravity are looking to break down the various tasks of a project, which are then farmed out to team members. Instead of being paid for their contributions, members are given tokens, which give them a stake in the ownership of the project proportionate to the number of tokens they hold. They’re therefore transparently tied to the project and can share in its success.

Content licensing

And then there’s monetising that content once it has been created — or finding a way to monetise one’s archive. Take AllRites, a media company based in Singapore that handles licensing of content on behalf of major TV and video players in the region. It is creating a platform that would move this marketplace onto the blockchain, in theory simplifying the licensing of that content, while also opening up B2C licensing — where you or I could buy streaming rights to movies, TV shows or documentaries by the hour, say — as well as a content funding platform. Initially, content would be represented on the blockchain via a unique identifier, but ultimately, its CEO Riaz Mehta hopes, technology will allow the content in its entirety to be stored on a blockchain, simplifying the process.

So why? What’s the point of this? There are several advantages, Mehta says. First off, an ICO allows them to raise money that venture capitalists would never provide, because VCs wouldn’t see the longer term and are only just starting to get blockchain. “For them,” Mehta told me, “this is the frontier land and they’re very cautious about what they put into it.”

But more significantly, the company believes that not only will the blockchain make for a more efficient marketplace, but that content locked up in the long tail of providers could be more readily found and monetised. By registering their content on AllRites’ blockchain, even niche providers, or content creators themselves, would be able to prove their rights, advertise their wares and sell to a much larger market.

There are other efforts in this area. Po.et is a shared ledger recording ownership and metadata for digital assets. Qravity is both a studio and a distributor of content created by decentralised teams. Both aim to build platforms that level the playing field for content makers — in other words, to disintermediate the middlemen who conspire against smaller producers of content.

Smart contracts

A key part of blockchain’s appeal is the idea that more than just tokens of value could be embedded in it. You could theoretically store the content itself, or you could store applications — code that actually does something. These are, or could be, smart contracts: a piece of code that, in its simplest form, kicks off an action, or sequence of actions, based on some input. On a certain date, say, ownership of a token could change hands. Imagine I have loaned a video to you, the record of which is stored on the blockchain. When that loan expires, ownership (and control) of that video returns to me. Part of the smart contract could delete it from your device, say, or require you to extend the loan, releasing tokens to me in payment. Smart contracts would effectively unleash much of the potential of what is otherwise just another database. Qravity, for example, plans to use smart contracts to determine how many tokens to distribute to each member of a team based on their contribution, as in the example described above.
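
The video-loan example maps naturally onto code. Here is a toy simulation of that contract’s logic in Python (real smart contracts run on-chain, typically in a language like Solidity; the fee, the 30-unit extension and all the names are invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class VideoLoan:
        owner: str
        borrower: str
        expires_at: int      # a block height or timestamp
        extension_fee: int   # tokens released to the owner per extension

        def on_expiry(self, now, borrower_tokens, wants_extension):
            if now < self.expires_at:
                return "loan still active"
            if wants_extension and borrower_tokens >= self.extension_fee:
                self.expires_at = now + 30  # hypothetical extension period
                return f"extended; {self.extension_fee} tokens go to {self.owner}"
            self.borrower = self.owner  # control reverts to the owner
            return "loan expired; video returned and removed from the device"

    loan = VideoLoan(owner="me", borrower="you", expires_at=100, extension_fee=5)
    print(loan.on_expiry(now=120, borrower_tokens=3, wants_extension=True))
    # -> "loan expired; video returned and removed from the device"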

Warnings

ICOs

There are concerns, however. Quite a few of these blockchain companies are launching, or have launched, initial coin offerings, or ICOs, to raise funds. These have proved very lucrative for some startups, but their appeal is beginning to fade. Regulatory anxiety is forcing ICO issuers to move away from calling their tokens securities, for one thing — so while they are still raising money from the sale of tokens, those tokens do not represent a stake in the underlying company. Instead the tokens will be used to buy services or products. And there’s the issue of incentive: if companies do raise money through ICOs, how can they be held to account over how that money is used?

The first problem with ICOs and the subsequent blizzard of tokens is: why? Why can’t people just buy these services with their own currency? The argument is usually two-fold: first, that tokens allow money to be transferred without the constraints of borders and currencies; and second, that they allow more value to be transferred than was possible before. Brave moves tokens (value) between the three main pillars of media — users, publishers and advertisers. So users, for example, can earn money directly from advertisers by giving their consent to view ads. Similarly, they can reward content producers on a proportional basis, depending on how much of each producer’s content they viewed during a month.

But I think the bigger worry is that these systems are too complicated for the end user. I tied myself in knots trying to chart the various transactions that would take place within Brave’s ecosystem — and that was one of the simpler ones. These complicated arrangements may work in B2B, but the diagrams that accompany nearly all these models highlight the same problem: for the solution to work, it must be invisible or intuitive to the end user. Users must not have to juggle multiple tokens, perform elaborate calculations in their heads, or maintain lots of separate apps, accounts or wallets. And they don’t want to see lots of real money locked up in tokens. I can’t help reading some of these white papers (the conventional way these days to explain how these blockchains and tokens might work) and feeling there might well be a simpler way of doing things.

Blockchain is often mentioned in the same breath as the invention of the internet. That may prove true. But I would say that for it to succeed, the better analogy is the invention of the World Wide Web — when Sir Tim Berners-Lee came up with a simple layer of links embedded in a familiar text-and-graphics interface, which unlocked the potential of the plain-vanilla and impenetrable internet. Until the blockchain can offer that, talk of its disruptive power in media is premature.

Of course, I might be wrong. Efforts like Civil hope to build a whole ecosystem — a platform encompassing many of the features I’ve described — and are already building a portfolio of news organisations. They describe it as a Netflix strategy: instead of waiting for someone else to aim big, they’re doing it themselves. And Yours’ Charles points to his company’s buy button, seamlessly woven into any webpage, which would allow anyone with Bitcoin Cash to pay a user for their content, in the same way we click on the Facebook like button now. So there is traction, of sorts.

Platforms and standards

Most of the startups I spoke to are keen to point out that they’re not pursuing blockchain technology blindly, ignoring other technologies such as artificial intelligence. Inkrypt’s co-founder and CEO Muhammad Ali Chaudhary, for example, says: “It is important to realize that blockchain ledgering is just one piece, albeit a very necessary one, of the technical solution being provided by Inkrypt. We are implementing blockchain technology in a particular way for a specific use case and at the end of the day we are a media tech company, as opposed to a ‘blockchain’ company.”

For sure, there will come a time when these companies decide it’s better not to even use the word blockchain to describe what they’re doing. And I think we’ll see some quietly disappear when they realise that journalism is for most people both a passion and a job, and that it might be hard to build a critical mass of journalists and content creators willing to be guinea pigs for untried and untested business models.

What I think needs to happen in the longer term is that independent media organisations, or funders, should work on building standards and platforms that allow all these tokenised initiatives to cooperate. We are some way from a world where people will be comfortable handling lots of different tokens, and it feels like a step backwards to push users in that direction. Better to encourage interchangeability — say, an exchange where you can easily buy and transfer your tokens — or a world where one token rules them all. In that sense, companies like Yours.org may have a head start, building APIs — application programming interfaces, software that allows services to talk to each other — so other content makers can plug into the Yours.org platform.

Ultimately though, I am optimistic that out of all these spaghetti-like flowcharts might emerge a model for media to find a better way of rewarding great content, keeping advertisers happy, and tapping into loyal audiences. I just don’t think we’re quite there yet.

]]>
Jeremy Wagstaff <![CDATA[Journalists are mobile warriors: we should upgrade our kit]]> https://www.kbridge.org/?p=3030 2018-07-18T14:50:14Z 2018-07-18T14:45:38Z  

I’ve been a nomad worker for some time. And I’m shocked at how few journalists seem to be prepared for mobile working. So I thought I’d offer a few tips.

If you can afford it, buy your own equipment.

I’ve been buying my own laptop for nearly 30 years, and while it brings pain to my pocket, I’d never dream of relying on my company’s equipment. In the old days it was because they were too slow and cumbersome, but nowadays it’s mainly because of compliance issues: restrictions on what software you can put on your laptop, as well as what the company is allowed to do and view on its hardware. I would rather retain control over how I organise my information and what apps I use.

Buy your own software.

I’m admittedly a bit of a software addict. (I think it’s probably a thing; I haven’t checked.) But there’s a reason for it: we spend most of our day at our computers, so it makes sense to find the software that best helps you. And for journalists, that’s a broad array of tasks. If you’re a freelancer, you want to measure your word count and time how long you’re spending on a piece. If you’re writing a lot, you want an app that is aesthetically pleasing (I can’t stand Microsoft Word, and hate it when I see journalists writing stories in it, but that’s me). Then there’s how you collect and store information, be it from the net or from interviews. It needs to go somewhere, and it needs to be easily retrievable when you want to write. More on this another time.

Get a decent mouse.

There’s a guy in my co-working space who still uses his MacBook touchpad — that rectangle near the keyboard — to move the cursor around. Very few people are adept at this, so it’s painful to watch people like my co-worker waste hours a day scrambling around. Buy a mouse. Really. They’re cheap — you can even get a Bluetooth one for less than $50 these days, so you don’t need to give up a USB port. I guarantee it will save you an hour a day.

Save your own neck.

Mobile journalism can mean standing up and moving around, but most of the time it means bringing enough equipment with you to be able to work away from the office — in a hotel room, a conference centre, or wherever. This is where I see far too many people hunched over a laptop, looking like Scrooge on Boxing Day. The problem with laptops is that they weren’t designed for posture. But you can fix that with a $20 stand. These are light and foldable, and lift the screen up closer to eye level, which is where it should be. You’ll need to bring an external keyboard with you, but those are cheap and light too, and your chiropractor will thank you.

While you’re at it get a second screen.

Here’s another tip: laptop screens are too small to show more than what you’re writing. If all your source material is also stored on your computer, then you’ll need a second screen. You likely have dual monitors in the office; why deny yourself that luxury just because you’re on the road? There are some good cheap monitors that don’t even require a power supply — plug them into your USB port and they’ll draw power from there. For several years I had an AOC monitor, which was basic but did the job. I recently upgraded to an Asus monitor which is a beauty, and has made me much more productive and the envy of my co-workers — even the guy fiddling around with the touchpad.

A word of warning to Mac users: recent updates to their operating system have broken the drivers necessary to get the most out of these second screens, but there is a workaround that half fixes it. Email me if you need help.

Be safe.

Being mobile with cool equipment does leave you vulnerable to theft, whether financially or politically motivated. Don’t take your main laptop with you to places like China. Have a cheap backup laptop with just the bare essentials on it. Always put your laptop in the room safe, and, if you want to be super careful, buy a small external USB drive to store any sensitive data on, and keep that in your pocket. Samsung does some nice SSD (solid-state, and hence smaller and faster) drives, the latest of which is called the T3. I attach mine to the laptop with velcro, then remove it and put it in my pocket when I’m heading off to dinner.

Stay connected.

Don’t trust other people’s wifi. Bring your own. I have a wifi modem, still 3G, which does me fine. Buy a local data SIM card and fire it up. Everyone in your team now has internet access — and the bad guys sniffing the free hotel or coffee shop wireless network will be frustrated.

Finally, stay cool.

By far the most popular thing in my mobile toolkit is a USB fan. Most conference venues are either too hot or too cold, and it’s amazing what a $2 fan can do.


]]>
Jeremy Wagstaff <![CDATA[Beyond the S Curve]]> https://www.kbridge.org/?p=3005 2018-08-20T08:11:15Z 2018-07-04T07:28:32Z

Mary Meeker. Photo Credit: Jasveer10 [CC BY-SA 4.0] from Wikimedia Commons

Venture capitalist Mary Meeker has been presenting her deck on internet trends for a few years now — twenty-three, to be precise. They’re good, albeit lengthy, and always thought-provoking. Each year I see if I can use her data to tell different stories from the ones she tells about what’s going on. This time I thought I’d take a look at her slides from a media perspective. I’m not saying these things will happen, but I think they might — and I think Ms Meeker’s data support my conclusions.


Slide 186 is simple enough: global shipments of smartphones by year, from 2007 until last year. That’s the decade when everything changed, when our computers were replaced by devices many times smaller, and when everything became mobile. The key thing about that chart is that it’s S-shaped: it starts out slow, rises precipitously, then levels out. In short: we bought no more smartphones in 2017 than we did in 2016. The S-curve was described by Richard Foster in 1985 and made famous by Clayton Christensen, who coined the term ‘disruptive innovation’.
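
For the mathematically minded, the standard way to model such a curve is the logistic function: slow start, steep middle, flat top. A quick sketch (a generic illustration, not fitted to Meeker’s shipment data):

    import math

    def s_curve(t, ceiling=1.0, midpoint=0.0, steepness=1.0):
        # The logistic function: the classic mathematical form of an S-curve.
        return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

    # Growth is fastest at the midpoint and nearly zero at both ends;
    # the 'levelling out' is where smartphone shipments now sit.
    for t in (-4, -2, 0, 2, 4):
        print(t, round(s_curve(t), 3))
    # -4 0.018 | -2 0.119 | 0 0.5 | 2 0.881 | 4 0.982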

The key thing here is that we are at that levelling-out part. That’s when both Foster and Christensen predict disruptive things happen. Foster called them discontinuities, Christensen called it disruption, but it amounts to the same thing: other companies, peddling other technologies, products, innovations or platforms, are poised to steal a march on the incumbents and leave them by the side of the road. But what?

Well. If much of the past decade has been driven by smartphones — and it has — then we’re near the end of the smartphone era. It’s been an interesting ride since 2007/8, but shipments tailed off in 2016, and my interest in what the new Galaxy or iPhone might be able to do tailed off about then too. That means uncertain times, as incumbents search for new technologies and new efficiencies to ward off newcomers, and the newcomers experiment until they find a disruption that works. I believe the future will have to lie beyond smartphones, to the point where we don’t need to interact with them at all and stop treating them (and fetishising them) as prized objects. That, of course, is some way off. But it will come.

For now though, there are some interesting opportunities, especially for the makers of content.

The first one is this: Apple won the hardware value war, but has probably lost the peace. Consider the following, all taken from Meeker’s data:

  • Operating systems other than iOS and Android have disappeared for the first time (slide 6). The platforms are now clear: Android will not be forked and owned by any hardware maker. (When did you last hear of Tizen in a phone?) Nor will any other challenger survive. There is absolutely no point in trying to build a new operating system for the phone. For other devices, maybe.
  • Google’s Android has maintained market dominance: three-quarters of all smartphones shipped last year ran Android. You would think that as the average selling price of phones increases, high-end Android devices would succumb to the flashier iPhones. Why not finally get that iPhone you’ve been dreaming of? But people don’t. Why? It’s probably because Apple phones are still significantly more expensive, meaning that the shift would usually be to one of the older, cheaper, discontinued, sometimes refurbished, models. (A significant chunk of iPhone users are on older devices.) In status-conscious places like China, that’s not an acceptable switch. Better a new model of a lesser brand, now that those brands are pretty nice-looking: think Huawei, Xiaomi, Samsung. Bottom line: as phones go into a replacement cycle, more and more high-end rollers are going to be on Android.

So what does this mean for media and content producers? I believe it represents an opportunity. As the market for hardware slows — fewer people buying new phones, more people taking longer to replace their old ones — more money is freed up to be spent elsewhere in the ecosystem: on software and services, in-app subscriptions, purchases and so on. Apple has traditionally benefited more from this — iOS users spend more in app stores and on in-app purchases than Android users ($1.50 per downloaded app, as opposed to about 30 cents for Android users, according to my calculation of App Annie data for Q1 2018). But the gap is narrowing: consumer spend on Google Play grew 25% that quarter, against 20% on the iOS store.

In other words: despite the obvious growing affluence of many Android users, the operating system is still ignored by several key media constituencies — the most obvious of which is podcasts, which remain mostly the domain of iOS users because Google has been late to make them a core feature of Android. That is changing, offering a window of opportunity. Any effort focused on Android is likely to pay off: as an OS it clearly isn’t going anywhere, and despite the fragmentation within Android, there are still huge markets to win over. Don’t ignore the Droid!
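
For clarity, the per-download figures above are simple division: consumer spend divided by downloads. A sketch of the method, with round illustrative inputs rather than App Annie’s actual Q1 2018 numbers:

    def spend_per_download(consumer_spend_usd, downloads):
        return consumer_spend_usd / downloads

    # Illustrative inputs only, chosen to land near the ratios in the text.
    ios = spend_per_download(consumer_spend_usd=11_000_000_000,
                             downloads=7_300_000_000)
    android = spend_per_download(consumer_spend_usd=5_600_000_000,
                                 downloads=19_000_000_000)
    print(round(ios, 2), round(android, 2))  # 1.51 0.29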

This is part of a bigger picture: a larger shift for the main players as markets get saturated. All the big tech players are competing increasingly on the same field. While part of that field is what I would call equipment (hardware and software), most of it is going to be over what you use that equipment for. As Ms Meeker points out:

  • Amazon is (also) becoming an ad platform, sponsoring products on its websites and apps
  • Google is (also) becoming a commerce platform (via Google Home ordering)

You might add to that

  • Netflix, Google, Amazon and Apple are all creating content.

Everyone is trying to do everything because they can’t afford not to.

All recognise that the future lies not in hardware, or software, or even platforms, but in stacking the shelves of those platforms. This is not, per se, about e-commerce, but about being the place where people live, within which that e-commerce — that buying, subscribing, consuming — takes place. The most obvious example of this is the voice assistant — Google’s Home or Amazon’s Alexa. These are spies in the house of love: devices that become part of the family, obediently learning your wishes and habits and trying to anticipate them.

It’s artificial intelligence geared towards understanding, anticipating and satisfying your inner selves.

For makers and purveyors of content, the challenge is going to be to understand this shifting playing field. Somehow you need to elbow your way into one of these channels and provide a service that fits their model. An obvious first step would be to ensure you have a ‘skill’ on Alexa’s platform, which users can easily activate to hear your news service over others. But deeper thinking may yield other opportunities — spelling games for kids that leverage your content, say. I’ll talk more about these opportunities in a future column, and would love to hear your ideas and experiences.
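
To give a flavour of how low the barrier is: a custom Alexa skill is, at its simplest, a small web function returning JSON in the Alexa Skills Kit response format. A minimal sketch of an AWS Lambda handler in Python; the headline text is a placeholder you would pull from your own CMS or feed:

    def handler(event, context):
        # Fetching the real headline from your CMS is left out here.
        latest_headline = "Your top story would be read aloud here."
        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {
                    "type": "PlainText",
                    "text": latest_headline,
                },
                "shouldEndSession": True,
            },
        }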


Watch Mary Meeker’s report keynote from the 2018 Code Conference


]]>