Editing article
Title
Summary
Content
<h3>Processing 10TB of Wikipedia Page Views — Part 2</h3><h3>Data Visualization</h3><p>This is the second of a two-part series in which we focus on interesting queries and visualizations using the data pipeline we created in part one.</p><p>In <a href="https://medium.com/@marcacohen/processing-10tb-of-wikipedia-page-views-part-1-b984d8ebe146">part one</a>, we covered the problem statement and the data engineering solution, including all the code needed to build a reliable, robust big data processing pipeline. In this part two, we’ll cover some of the fun things we can do once the data pipeline is running: interesting queries, data visualizations, and a Data Studio dashboard you can try yourself.</p><h3>What can we do with this data?</h3><p>For starters, let’s find the most popular Wikipedia article so far this year:</p><pre>SELECT title, SUM(views) total_views <br> FROM `bigquery-public-data.wikipedia.pageviews_2020` <br>WHERE wiki IN ('en', 'en.m') <br> AND title NOT IN ('Main_Page','-','Wikipedia')<br> AND title NOT LIKE 'File%' <br> AND title NOT LIKE 'Special:%' <br> AND title NOT LIKE 'Portal:%' <br> AND datehour>='2020-01-01' <br> AND views>10000 <br>GROUP BY title <br>ORDER BY 2 DESC LIMIT 10</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/736/0*4hAkF4gIf7ywHm-D.png" /></figure><h3>It’s only rock & roll, but I like it</h3><p>That last query was interesting, but it doesn’t take advantage of the entity data we worked so hard to process in part 1. Let’s construct an example that leverages our wikidata table to find the most viewed pages for rock bands in 2020. Of course, this raises the question: what is a rock band? Wikidata to the rescue!</p><p>You can search Wikidata interactively via the wikidata.org site, like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*KfBEsxFKc9O3Cuek.png" /></figure><p>This shows us that “rock band” matches multiple entities, including the video game of the same name. 
The entity we’re after is the first one: 5741069. One way to confirm this finding is to search for a known entity that should be in this category:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/912/0*-OYfCkstAhM2C3us.png" /></figure><p>Here we see that the preeminent rock band, The Beatles, is indeed classified as an instance of “Rock Band”. But this doesn’t catch every page I’m interested in. For example, Radiohead is considered an instance of “Musical Group” (215380).</p><p>Armed with those two entity IDs, we can now run some queries about popular bands. But I want to do one more thing before we start querying. Since our scope is limited to just those two entities, it’s wasteful to scan the full 10TB dataset on every query. Wouldn’t it be nice if there were a way to limit our search to include only the pages we care about? Well, there is — we’ll materialize a small derived table (similar in spirit to a view, but stored as a partitioned, clustered table), which will limit our query scope to only the view counts for pages about bands.</p><p>Here’s the SQL code to create this table, which I’ve made public so you can try it too:</p><pre>CREATE OR REPLACE TABLE `mco-bigquery.wikipedia.bands`<br>(datehour TIMESTAMP, title STRING, views INT64) <br>PARTITION BY DATE(datehour)<br>CLUSTER BY title <br>AS <br> SELECT datehour, title, SUM(views) views<br> FROM `bigquery-public-data.wikipedia.pageviews_*` a<br> JOIN (<br> SELECT DISTINCT en_wiki<br> FROM `bigquery-public-data.wikipedia.wikidata`<br> WHERE EXISTS (SELECT * FROM UNNEST(instance_of) WHERE numeric_id=5741069 OR numeric_id=215380)<br> AND en_wiki IS NOT NULL <br> ) b <br>ON a.title=b.en_wiki<br>AND a.wiki='en' <br>AND DATE(a.datehour) BETWEEN '2015-01-01' AND '2020-12-31' <br>GROUP BY datehour, title</pre><p>This table gives us a dataset we can query much more quickly and economically, because we’re only scanning information associated with bands, a small subset of the overall dataset.</p><p>Let’s find the most wiki-popular band so far in 2020 with this 
query:</p><pre>SELECT title, SUM(views) views <br>FROM `mco-bigquery.wikipedia.bands` <br>WHERE DATE(datehour) BETWEEN "2020-01-01" AND "2020-12-31" <br>GROUP BY title <br>ORDER BY views DESC <br>LIMIT 10</pre><p>And the results as of this writing…</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/510/0*K5c2m4fVSfCaFgoW.png" /></figure><h3>Let’s make a dashboard!</h3><p>This is fun, but:</p><ol><li>It’s a snapshot of a single moment — it doesn’t give us any sense of how this data varies over time.</li><li>SQL queries can be a bit complicated, especially for beginners.</li><li>This query is interactive for me, but I’d like to share that interactivity with others.</li></ol><p>Wouldn’t it be nice if I could easily, without writing a single line of code, allow everyone to query this data with mouse clicks rather than SQL queries, and to see the results in a nice color-coded time-series graph?</p><p>Using <a href="https://datastudio.google.com/overview">Google Data Studio</a>, I made just such a dashboard, which I’ve embedded below. Give it a try — you can play with the selected bands and the time frame you’d like to analyze.</p><p>For example, I wonder which band was more popular during the last five years: the Beatles or the Stones? We can use this dashboard to find out in just a few seconds:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*BDsdoG9_Uj8csp1n.png" /></figure><p>Despite having ended their career fifty years ago, the Beatles continue to attract an impressive level of attention in Wikipedia page views.</p><h3>Now it’s your turn</h3><p>I’ve made this data available to everyone in the BigQuery Public Dataset collection. The page views arrive roughly every hour and the entity data gets refreshed every 3–4 days. 
The data can be found in the bigquery-public-data collection, under Wikipedia, as shown below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Y0WQm06l-plYB90i.png" /></figure><p>These tables are partitioned, so you can save time and cost by limiting your queries by date, like this:</p><pre>SELECT title, SUM(views) views <br>FROM `bigquery-public-data.wikipedia.pageviews_2019` <br>WHERE DATE(datehour) BETWEEN "2019-01-01" AND "2019-12-31" <br> AND wiki = 'en' <br>GROUP BY title <br>ORDER BY views DESC <br>LIMIT 20</pre><p>The “Battle of the Bands” dashboard is also available for your use at <a href="https://mco.fyi/bands">mco.fyi/bands</a>.</p><h3>Resources and Acknowledgements</h3><ul><li><a href="https://medium.com/@marcacohen/processing-10tb-of-wikipedia-page-views-part-1-b984d8ebe146">Part 1</a> of this series</li><li>All of the code described in this article can be found at <a href="https://github.com/marcacohen/wikidata">github.com/marcacohen/wikidata</a>.</li><li>Many thanks to <a href="https://medium.com/@hoffa">Felipe Hoffa</a>. 
Several of the queries and data management techniques in this article were derived from Felipe’s excellent articles on <a href="https://medium.com/google-cloud/bigquery-lazy-data-loading-ddl-dml-partitions-and-half-a-trillion-wikipedia-pageviews-cd3eacd657b6">partitioning and lazy loading</a>, and <a href="https://medium.com/google-cloud/bigquery-optimized-cluster-your-tables-65e2f684594b">clustering</a>.</li><li><a href="https://twitter.com/shanecglass">Shane Glass</a> provided invaluable support helping me add this data to the wonderful <a href="https://cloud.google.com/public-datasets">Google Cloud Public Datasets</a> collection.</li></ul><p><em>Originally published at </em><a href="https://mco.dev/processing-10tb-of-wikipedia-page-views-part-2/"><em>https://mco.dev</em></a><em> on March 21, 2020.</em></p>
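The article's queries can also be run programmatically. Below is a minimal sketch, in Python, of building the top-bands query against the public `mco-bigquery.wikipedia.bands` table and roughly estimating its on-demand cost. The helper names and the per-TiB price are illustrative assumptions (check current BigQuery pricing); actually executing the query requires the google-cloud-bigquery client and credentials, so that part is shown only in comments.

```python
from datetime import date


def top_bands_sql(start: date, end: date, limit: int = 10) -> str:
    """Build the article's top-bands query for a given date range.

    Targets the public table described in the article; partition
    pruning on DATE(datehour) keeps the scan small.
    """
    return (
        "SELECT title, SUM(views) views "
        "FROM `mco-bigquery.wikipedia.bands` "
        f"WHERE DATE(datehour) BETWEEN '{start.isoformat()}' AND '{end.isoformat()}' "
        f"GROUP BY title ORDER BY views DESC LIMIT {limit}"
    )


def estimated_cost_usd(bytes_processed: int, usd_per_tib: float = 5.0) -> float:
    """Rough on-demand cost estimate: bytes scanned times a per-TiB rate.

    The default rate is an assumption for illustration only.
    """
    return bytes_processed / 2**40 * usd_per_tib


# Executing the query would look roughly like this (requires the
# google-cloud-bigquery package and valid credentials, so it is not run here):
#
#   from google.cloud import bigquery
#   client = bigquery.Client()
#   sql = top_bands_sql(date(2020, 1, 1), date(2020, 12, 31))
#   dry = client.query(sql, job_config=bigquery.QueryJobConfig(dry_run=True))
#   print("estimated cost:", estimated_cost_usd(dry.total_bytes_processed))
#   for row in client.query(sql):
#       print(row.title, row.views)
```

The dry run mirrors the article's cost argument: querying the small bands table scans orders of magnitude fewer bytes than querying the full 10TB pageviews dataset.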
Author
Link
Published date
Image url
Feed url
Guid
Hidden blurb
--- !ruby/object:Feedjira::Parser::RSSEntry title: Processing 10TB of Wikipedia Page Views — Part 2 url: https://marcacohen.medium.com/processing-10tb-of-wikipedia-page-views-part-2-48352059fd44?source=rss-1f225b5f22b2------2 author: Marc Cohen categories: - google-cloud-platform - bigquery - google-data-studio published: 2020-03-21 00:00:00.000000000 Z entry_id: !ruby/object:Feedjira::Parser::GloballyUniqueIdentifier is_perma_link: 'false' guid: https://medium.com/p/48352059fd44 carlessian_info: news_filer_version: 2 newspaper: Marc Cohen - Medium macro_region: Blogs rss_fields: - title - url - author - categories - published - entry_id - content
Language
Active
Ricc internal notes
Imported via /Users/ricc/git/gemini-news-crawler/webapp/db/seeds.d/import-feedjira.rb on 2024-03-31 23:41:10 +0200. Content is EMPTY here. Entried: title,url,author,categories,published,entry_id,content. TODO add Newspaper: filename = /Users/ricc/git/gemini-news-crawler/webapp/db/seeds.d/../../../crawler/out/feedjira/Blogs/Marc Cohen - Medium/2020-03-21-Processing_10TB_of_Wikipedia_Page_Views — Part_2-v2.yaml
Ricc source