Category: Resources

Curate datasets for fun, and profit

Cover photo source: raul kalvo

The term “curator” was traditionally used in the context of museums, libraries, galleries and art exhibitions. It generally refers to a person who creatively plans and carefully organises resources to maximise their utility for the audience. The process by which a curator gets the job done is thus to “curate”.

For TL;DR readers, here are the sites worth your visit:

Continue reading “Curate datasets for fun, and profit”

Using Multiple Databases for Corporate Background Checks: Dissecting the Methods of a Citizen Investigation

Original article: 我猜你们一定很想了解一下红黄蓝 (“I bet you really want to learn about RYB”)

As the child-abuse scandal at Beijing’s RYB (红黄蓝) kindergarten kept escalating, the topic “three colours cannot make the trending list” briefly appeared on Weibo’s hot-search board. Influencers on China’s two major social media platforms, Weibo and WeChat public accounts, keep publishing articles on the incident from different angles. This time our editor looks at “差評君” (Chaping), a tech self-media account known for roasting tech companies and their products, which was among the first to publish an investigation into the business background of the RYB kindergarten chain; the WeChat article drew more than 100,000 views.

Working from publicly available online materials and relying on the skilled use of various search tools, the article presents a successful case of data-driven investigative reporting. Our editor “reverse-engineers” it and walks you through the databases and investigative techniques it uses.

Continue reading “Using Multiple Databases for Corporate Background Checks: Dissecting the Methods of a Citizen Investigation”

Learn Spreadsheet to Mine Data and Jumpstart Your Data Journalism Career – A Sharing by Aimee Edmondson

Aimee Edmondson is now an Associate Professor at the Scripps School of Journalism, Ohio University. HKBU students are very lucky to have this knowledgeable and passionate speaker talk about data journalism this afternoon. Her 12 years in reporting, combined with the statistics and technology skills she acquired later, make a fine combination for a data journalist. In a world where people are too fascinated by new technology and numerous boot camps are created by non-journalists, Aimee can be a role model for “traditional journalists” who are moving in this direction.

Why does data matter? In Aimee’s words, you want to be a reporter, not a repeater. Data helps you verify what a source is saying and find out what is really happening. On the pragmatic side, we are seeing more and more job descriptions requiring data analytics skills from investigative reporters. Beyond the journalism domain, the skills trained by data journalism fit well into corporate communication, public relations and advertising.

Picture: Job boards on IRE, from the slides

To start, one only needs to work on “small data”, with a spreadsheet.

Continue reading “Learn Spreadsheet to Mine Data and Jumpstart Your Data Journalism Career – A Sharing by Aimee Edmondson”

Data News of the Week | e-waste in Hong Kong

Cover photo credit: Monitour Project

We have a special edition of DNW this week dedicated to e-waste in Hong Kong. The notes are derived from a seminar plus brainstorming session with researchers from CUHK, HKBU, PolyU and Lingnan U, and activists from Land Justice, Open Data Hong Kong and CODE4HK. This is a quick note from memory, so the evidence, statistics and figures quoted here need further verification before you use them. There are enough pointers for readers to go back to the sources and find direct contacts.

The news points to follow

E-waste refers to abandoned Electrical and Electronic Equipment (EEE). With the boom of the ICT industry, we are seeing more and more e-waste these days. Why should you care? Let’s cut through the news points first:

  • 75% of e-waste disappears, as Greenpeace estimates. Greenpeace collects EEE production data and calculates the expected amount of e-waste from device lifespans. Comparing this estimate with the e-waste collection figures from formal government bodies reveals a 75% gap, meaning that share is untracked (a back-of-the-envelope sketch of the calculation follows this list).
  • 97.7% of e-waste in Hong Kong goes through unknown channels (a 2009 figure; it may change with the new recycling plant, and the government is trying to expand supervised channels). This may signal a large number of illegal operations, though not all of them are necessarily illegal.
  • Hong Kong used to import a large volume of e-waste because of loopholes in the legislation. That e-waste went on to mainland China for processing. The export route to China was disrupted in 2015.
  • Yards, factories and workshops that collect, process and dump e-waste exist in many remote locations in Hong Kong, especially the New Territories. Those locations are not easily accessible, shielded by “private land” and “gangs”, as Land Justice investigators put it.
  • Many workers in those yards are illegal immigrants, for example from mainland China and Southeast Asia. They usually work without proper protective measures.
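
As referenced in the first bullet, here is a back-of-the-envelope sketch of how such a gap estimate can be computed. Every number below is a made-up placeholder chosen for illustration, not a Greenpeace or government figure.

```python
# All numbers below are made-up placeholders, not Greenpeace or government data.
units_sold_one_lifespan_ago = 1_000_000   # hypothetical: devices put on the market ~5 years ago
avg_unit_weight_kg = 2.0                  # hypothetical average device weight

# Devices sold one average lifespan ago are assumed to reach end of life now.
expected_ewaste_tonnes = units_sold_one_lifespan_ago * avg_unit_weight_kg / 1000
collected_tonnes = 500                    # hypothetical figure reported by formal channels

untracked_share = 1 - collected_tonnes / expected_ewaste_tonnes
print(f"Untracked e-waste: {untracked_share:.0%}")  # prints 75% with these placeholder numbers
```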

Continue reading “Data News of the Week | e-waste in Hong Kong”

Data News of the Week | Paradise Papers

Do you still remember the massive Panama Papers leak in 2016? When 13.4 million financial documents were released this November, the offshore paradise islands drew global attention again. The Paradise Papers cover the period from 1950 to 2016 and include more than 120,000 people and 25,000 offshore companies.

Tech-savvy readers can jump to the database directly. As before, the dataset is modelled as a graph, treating the Officers, Intermediaries and Addresses as nodes and their relationships as links. Neo4j is one widely adopted graph database. Its web user interface, the “neo4j browser”, allows journalists to visually expand and explore a graph. The query language Cypher combines relational-style querying (as in SQL), full-text search and graph pattern matching. Its flexibility and built-in graph algorithms allow experienced journalists to study the underlying graph systematically. The download page on ICIJ includes snapshots of four Neo4j databases exported in CSV format.
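
As a taste of what Cypher looks like when scripted rather than clicked through in the neo4j browser, here is a minimal sketch using the official neo4j Python driver. The connection URI, credentials, the :Officer label and the name property are assumptions for illustration, not details taken from the ICIJ documentation; check the schema that ships with the download.

```python
# Minimal sketch: query a local copy of the leak graph with the neo4j Python driver.
# Connection details, node label and property names are assumptions.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (o:Officer)-[r]-(n)
WHERE toLower(o.name) CONTAINS $keyword
RETURN o.name AS officer, type(r) AS relation, labels(n) AS labels, n.name AS name
LIMIT 25
"""

with driver.session() as session:
    for record in session.run(query, keyword="smith"):
        print(record["officer"], "-[", record["relation"], "]-", record["name"], record["labels"])

driver.close()
```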

Continue reading “Data News of the Week | Paradise Papers”

Data News of the Week | North Korea Tensions

“North Korea”, or the “Democratic People’s Republic of Korea (DPRK)”, is a recurrent and frequent newspaper headline. Recent advances in missile technology and nuclear tests threaten the world and create a lot of geopolitical tension. Our editor would like to share relevant data projects this week.

The “wholesale” packages

Assuming you are too busy to study all the background information and catch up on the latest news, here are two must-read projects that bring you up to date in 30 minutes.

☞ Immersive reporting from ESRI StoryMaps: a side-by-side comparison of the two Koreas from multiple angles [Link]

Continue reading “Data News of the Week | North Korea Tensions”

Embedding interactive rich media on WordPress

Source: Wiki Commons

There are a lot of “one-click” tools available online that help you create good visualisations and export them as iframes for embedding into your site. Good use of those tools can better present your content to readers. Note that the free version of the WordPress hosted service does not allow embedding iframes, so free sites can only rely on shortcodes. For example, there is a shortcode for embedding interactive charts generated from Google Sheets. See more options of available shortcodes for the free version here.

Data and News Society runs on a paid plan, so we installed the iframe plugin. This makes it possible to embed a wide range of third-party visualisations into your project. This tutorial, contributed by Jade Li, demos how to embed interactive content from several common tools. The general workflow is to export the third-party project as an iframe, find the URL in its src=”” attribute, and use [iframe src=””] to embed it into WordPress.
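
To make that workflow concrete, here is a hypothetical before-and-after. The embed URL is a placeholder, and the width/height attributes are an assumption about what the installed iframe plugin accepts, so check the plugin’s documentation.

```html
<!-- Hypothetical iframe snippet exported from a third-party charting tool -->
<iframe src="https://charts.example.com/embed/abc123" width="600" height="400"></iframe>

<!-- The same embed rewritten as the WordPress shortcode used on this site -->
[iframe src="https://charts.example.com/embed/abc123" width="600" height="400"]
```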

Continue reading “Embedding interactive rich media on WordPress”

Recap of Oct 2017 Data Journalism Bootcamp in HKBU

The two-day Data Journalism Boot Camp was successfully held at HKBU on Oct 26 and 27. The event was sponsored by KAS, and the workshop sessions were led by two experienced trainers from DataLEADS. Another highlight was a roundtable discussion chaired by Prof. Ying Chen, where professionals shared their practices, challenges and solutions in the newsroom.

Data Bootcamp in Oct 2017

Continue reading “Recap of Oct 2017 Data Journalism Bootcamp in HKBU”

wget, the Simplest Crawler: One Command to Assist Investigative Reporters

Writing crawlers has become an essential skill for data journalists. Online services such as ScrapingHub, Morph and ParseHub can scrape web pages with little or no code, but in many cases you still need to write the crawling logic by hand. Crawler writing breaks down into two parts: crawling and extracting. “Crawling” means starting from one page, finding the links it contains, visiting them one by one and repeating the process until you have harvested the pages you need; it resembles how people browse the web, following one lead to the next. “Extracting” is the process of pulling useful information out of a page, turning the “semi-structured” web page into a “structured” data table.

This post introduces the simplest crawler, which needs only one command: wget -r
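
If you prefer to script the two steps instead of relying on wget, here is a minimal Python sketch of the crawl-then-extract split described above, using requests and BeautifulSoup. The start URL and the 50-page cap are arbitrary placeholders; this is an illustration, not the method the post teaches.

```python
# Minimal crawl-and-extract sketch (illustration only; the post itself uses wget -r).
import urllib.parse
import requests
from bs4 import BeautifulSoup

START_URL = "https://example.com/"        # hypothetical starting page
seen, queue = set(), [START_URL]
rows = []

while queue and len(seen) < 50:           # small cap to keep the demo polite
    url = queue.pop(0)
    if url in seen:
        continue
    seen.add(url)
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # "Extract": pull semi-structured content into structured rows.
    rows.append({"url": url, "title": soup.title.string if soup.title else ""})

    # "Crawl": follow links on the same site, as wget -r does.
    for a in soup.find_all("a", href=True):
        link = urllib.parse.urljoin(url, a["href"])
        if link.startswith(START_URL) and link not in seen:
            queue.append(link)

print(len(rows), "pages collected")
```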

Continue reading “wget, the Simplest Crawler: One Command to Assist Investigative Reporters”

Using Tableau’s JOIN Feature to Filter for Complete Data Segments

Data journalism often involves handling large amounts of missing data. If the raw data is a two-dimensional table, that table is riddled with “holes”, and we often want to filter the holes out and keep only complete rows and columns, so the analysis can be carried out on a well-defined subset.

This post is contributed by our student Zoya, whose goal was to map the volume of municipal waste collected in each country. The raw data comes from the UN Municipal Waste Collection Dataset, whose year coverage is incomplete (missing data). To keep the comparison consistent, the project filters for countries that have data for every year from 2002 to 2012 (11 years in total) before drawing the map. This tutorial shows two methods, each with techniques worth borrowing. Method 1 combines basic features of Excel, OpenRefine and Tableau, and finally uses Tableau’s JOIN operation to filter out the missing data. Method 2 exploits the particularities of this use case to complete the whole data flow inside Tableau, using the special calculated field “# of Records”.
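
For readers who work in Python rather than Tableau, here is a pandas sketch of the same filtering idea. The file name and column names are assumptions about a long-format version of the UN table, not the exact fields used in the post.

```python
import pandas as pd

# Hypothetical long-format table: one row per country-year observation.
df = pd.read_csv("un_municipal_waste.csv")   # assumed columns: country, year, waste_collected

years = set(range(2002, 2013))               # the 11 years 2002-2012 inclusive
subset = df[df["year"].isin(years)].dropna(subset=["waste_collected"])

# A country is "complete" if it still has all 11 distinct years after dropping missing values.
years_per_country = subset.groupby("country")["year"].nunique()
complete_countries = years_per_country[years_per_country == len(years)].index

filtered = subset[subset["country"].isin(complete_countries)]
print(filtered["country"].nunique(), "countries kept for the map")
```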

Picture: screenshot of the raw data

Continue reading “Using Tableau’s JOIN Feature to Filter for Complete Data Segments”