2020
Proceedings of the 12th Web as Corpus Workshop
Adrien Barbaresi | Felix Bildhauer | Roland Schäfer | Egon Stemle
2016
CommonCOW: Massively Huge Web Corpora from CommonCrawl Data and a Method to Distribute them Freely under Restrictive EU Copyright Laws
Roland Schäfer
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
In this paper, I describe a method of creating massively huge web corpora from the CommonCrawl data sets and redistributing the resulting annotations in a stand-off format. Current EU (and especially German) copyright legislation categorically forbids the redistribution of downloaded material without express prior permission from the authors. Therefore, such stand-off annotations (or other derivatives) are the only format in which European researchers (like myself) are allowed to redistribute the respective corpora. In order to make the full corpora available to the public despite such restrictions, the stand-off format presented here allows anybody to locally reconstruct the full corpora with the least possible computational effort.
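To illustrate the general idea behind such stand-off redistribution, here is a minimal sketch of how a single corpus document might be reconstructed locally from the public CommonCrawl archive. The record fields (WARC path, byte offset, record length, SHA-1 checksum) and the helper names are illustrative assumptions, not the format actually used in the paper.

```python
# Hypothetical sketch: rebuild one corpus document from a stand-off record.
# The assumed record fields (warc_path, offset, length, sha1) are illustrative,
# not the paper's actual stand-off format.
import hashlib
import io

import requests
from warcio.archiveiterator import ArchiveIterator

CC_BASE = "https://data.commoncrawl.org/"  # public CommonCrawl HTTP mirror


def fetch_document(warc_path, offset, length):
    """Download a single WARC record via an HTTP range request."""
    headers = {"Range": f"bytes={offset}-{offset + length - 1}"}
    resp = requests.get(CC_BASE + warc_path, headers=headers, timeout=60)
    resp.raise_for_status()
    # Each CommonCrawl WARC record is individually gzipped, so the byte slice
    # is itself a valid gzip member that warcio can parse on its own.
    for record in ArchiveIterator(io.BytesIO(resp.content)):
        if record.rec_type == "response":
            return record.content_stream().read()
    return None


def reconstruct(standoff_record):
    """Re-fetch the document and verify it against the stored checksum."""
    payload = fetch_document(standoff_record["warc_path"],
                             standoff_record["offset"],
                             standoff_record["length"])
    if hashlib.sha1(payload).hexdigest() != standoff_record["sha1"]:
        raise ValueError("checksum mismatch: CommonCrawl segment changed?")
    return payload
```

A full stand-off release would presumably also ship the linguistic annotations, to be merged back onto the locally reconstructed text.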
Proceedings of the 10th Web as Corpus Workshop
Paul Cook | Stefan Evert | Roland Schäfer | Egon Stemle
Automatic Classification by Topic Domain for Meta Data Generation, Web Corpus Evaluation, and Corpus Comparison
Roland Schäfer | Felix Bildhauer
Proceedings of the 10th Web as Corpus Workshop
On Bias-free Crawling and Representative Web Corpora
Roland Schäfer
Proceedings of the 10th Web as Corpus Workshop
2014
Proceedings of the 9th Web as Corpus Workshop (WaC-9)
Felix Bildhauer | Roland Schäfer
Focused Web Corpus Crawling
Roland Schäfer | Adrien Barbaresi | Felix Bildhauer
Proceedings of the 9th Web as Corpus Workshop (WaC-9)
2012
Building Large Corpora from the Web Using a New Efficient Tool Chain
Roland Schäfer | Felix Bildhauer
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
Over the last decade, methods of web corpus construction and the evaluation of web corpora have been actively researched. Prominently, the WaCky initiative has provided both theoretical results and a set of web corpora for selected European languages. We present a software toolkit for web corpus construction and a set of significantly larger corpora (up to over 9 billion tokens) built using this software. First, we discuss how the data should be collected to ensure that it is not biased towards certain hosts. Then, we describe our software toolkit, which performs basic cleanups, boilerplate removal, simple connected text detection, and shingling to remove duplicates from the corpora. We finally report evaluation results for the corpora built so far, for example w.r.t. the amount of duplication contained and the text type/genre distribution. Where applicable, we compare our corpora to the WaCky corpora, since it is inappropriate, in our view, to compare web corpora to traditional or balanced corpora. While we use some methods applied by the WaCky initiative, we can show that we have introduced incremental improvements.
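As a rough illustration of the shingling step mentioned in the abstract, the following sketch flags near-duplicate documents by comparing sets of hashed word n-grams. The shingle size and similarity threshold are illustrative defaults, not the toolkit's actual settings.

```python
# Hypothetical sketch of shingling-based near-duplicate removal.
# Shingle size (n=5) and threshold (0.9) are illustrative, not the
# toolkit's actual parameters.
import hashlib


def shingles(text, n=5):
    """Return the set of hashed word n-grams ("shingles") for a document."""
    tokens = text.split()
    return {
        hashlib.md5(" ".join(tokens[i:i + n]).encode("utf-8")).hexdigest()
        for i in range(max(len(tokens) - n + 1, 1))
    }


def jaccard(a, b):
    """Jaccard similarity of two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def deduplicate(docs, threshold=0.9):
    """Keep a document only if it is not a near-duplicate of one kept earlier."""
    kept, kept_shingles = [], []
    for doc in docs:
        s = shingles(doc)
        if all(jaccard(s, t) < threshold for t in kept_shingles):
            kept.append(doc)
            kept_shingles.append(s)
    return kept
```

A production pipeline would avoid the pairwise comparison (for example by sorting shingle fingerprints or using MinHash signatures), but the acceptance decision follows the same logic.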