Publisher's Synopsis
Do you want to gain a deeper understanding of how big tech analyses and exploits our text data, or investigate how political parties differ by analysing textual styles, associations and trends in documents? Or would you like to create a map of a text collection and write a simple question-answering (QA) system yourself?
This book explores how to apply state-of-the-art text analytics methods to detect and visualise phenomena in text data. Solidly grounded in methods from corpus linguistics, natural language processing, text analytics and digital humanities, it shows readers how to conduct experiments with their own corpora and research questions, underpin their theories, quantify differences and pinpoint characteristics. Case studies and experiments are detailed in every chapter using real-world and open-access corpora from politics, World English, history and literature. The results are interpreted and put into perspective, pitfalls are pointed out, and the necessary pre-processing steps are demonstrated.

The book also demonstrates how to use the programming language R, as well as simple alternatives and additions to R, to conduct experiments and create visualisations by example, with extensible R code, recipes, links to corpora, and a wide range of methods. The methods introduced can be used across texts of all disciplines, from history and literature to party manifestos and patient reports.