Resources
On this page you’ll find links to all sorts of things I have found useful, including tutorials, books, and general reading on R, Praat, statistics, software, corpora, design, and more.
I haven’t really updated this page since about 2019, so it may not include the latest resources. Some links may be dead. I’ve taken this page off my website’s main navigation bar, but I’ll keep it around in case others find it useful.
My handouts, tutorials, and workshops
R Workshops
This is a series of workshops on how to use R, covering a variety of topics. I have included PDFs and additional information for each installment in the series.
Formant extraction tutorial
This tutorial walks you through writing a Praat script that extracts formant measurements from vowels. If you’ve never worked with Praat scripting but want to work with vowels, this might be a good starting point.
Vowel plots in R tutorials (Part 1 and Part 2)
This is a multi-part tutorial on how to make the typical sorts of vowel plots in R. Part 1 shows plotting single-point measurements as scatter plots and serves as a mild introduction to ggplot2. Part 2 shows how to plot trajectories, both in the F1-F2 space and in a Praat-like time-Hz space, and is a bit of an introduction to the tidyverse as well.
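To give you a taste of what Part 1 covers, here’s a minimal sketch of a single-point F1-F2 plot in ggplot2 (the data frame and column names here are hypothetical):

```r
library(ggplot2)

# df is a hypothetical data frame with one row per vowel token and
# columns F1, F2, and vowel (the vowel label)
ggplot(df, aes(x = F2, y = F1, color = vowel)) +
  geom_point(alpha = 0.5) +
  # Vowel plots conventionally reverse both axes so high front vowels
  # end up in the top left, like the IPA vowel chart
  scale_x_reverse() +
  scale_y_reverse() +
  theme_minimal()
```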
Measuring vowel overlap in R (Part 1 and Part 2)
This is a two-part tutorial on calculating Pillai scores and Bhattacharyya’s Affinity in R. The first covers what I consider the bare necessities, culminating in custom R functions for each. The second is a bit more in-depth as it looks at ways to make the functions more robust, but it also shows some simple visualizations you can make with the output.
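For a flavor of the bare-necessities version, here’s a minimal sketch (not the exact function from the tutorial) of getting a Pillai score out of a MANOVA; the data frame and column names are hypothetical:

```r
# df is a hypothetical data frame with columns F1, F2, and vowel, where
# vowel has two levels (the two vowel classes being compared)
pillai <- function(df) {
  model <- manova(cbind(F1, F2) ~ vowel, data = df)
  # The Pillai trace sits in the first row of the summary's stats table
  summary(model)$stats[1, "Pillai"]
}

pillai(low_back_tokens)  # e.g., a hypothetical subset of LOT and THOUGHT tokens
```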
Make yourself googleable
I’m no expert, but I have given a workshop on how grad students can increase their online presence and make themselves more googleable, based in large part on ImpactStory’s fantastic 30-day challenge, which you can read here.
Academic Poster Workshop
In response to the need for a “How to Make an Academic Poster” workshop, I put one together at the last minute. Poster-making is more of an art than a science, and this is a very opinionated view on the dos and don’ts of making an academic poster.
Excel Workshop
I once gave a workshop on Excel and ended up producing a long handout that goes from the very basics to relatively tricky techniques. The link above will take you to a blog post that summarizes the workshop, and you can also find the handout itself there.
R Resources
Here is a list of resources I’ve found for R. I’ve gone through some of them and others are on my to-do list. These are in no particular order.
General R Coding
The website for the Tidyverse is a great go-to place for learning how to use dplyr, tidyr, and many other packages. (A minimal dplyr/tidyr sketch appears at the end of this list.)
R for Data Science by Garrett Grolemund & Hadley Wickham is a fantastic overview of tidyverse functions.
Advanced R by Hadley Wickham with the solutions by Malte Grosser, Henning Bumann, Peter Hurford & Robert Krzyzanowski.
R Packages by Hadley Wickham. Also try Shannon Pileggi’s tutorial called Your first R package in 1 hour to see some of these tools in action.
Hands-On Programming with R by Garrett Grolemund & Hadley Wickham for writing functions and simulations. Haven’t read it, but it looks good.
r-statistics.co by Selva Prabhakaran which has great tutorials on R itself, ggplot2, and advanced statistical modeling.
Tidymodels is like the Tidyverse suite of packages, but it’s meant for better handling of many statistical models. Also see its GitHub page.
Learn to purrr by Rebecca Barter is the tutorial on purrr that I wish I had.
Modern R with the Tidyverse by Bruno Rodriguez is a work in progress (as of June 2022), but it’s another free eBook that shows R and the Tidyverse.
Easystats “is a collection of R packages, which aims to provide a unifying and consistent framework to tame, discipline, and harness the scary R statistics and their pesky models.”
Oscar Baruffa’s monstrous Big Book of R is your one-stop resource for open-source R books on pretty much any topic. There are hundreds of books!
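To give a flavor of the dplyr/tidyr workflow that several of the resources above teach, here’s a minimal sketch (the data frame and column names are hypothetical):

```r
library(dplyr)
library(tidyr)

# vowels is a hypothetical data frame with columns speaker, vowel, F1, and F2
vowel_means <- vowels %>%
  group_by(speaker, vowel) %>%
  summarize(F1 = mean(F1), F2 = mean(F2), .groups = "drop") %>%
  # Reshape from wide (separate F1 and F2 columns) to long (formant, hz)
  pivot_longer(cols = c(F1, F2), names_to = "formant", values_to = "hz")
```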
Working with Text
Text Mining with R by Julia Silge & David Robinson. Haven’t read it, but it looks great.
Handling Strings with R by Gaston Sanchez.
If you use the CMU Pronouncing Dictionary, you should look at the phon package. It makes the whole thing searchable and easy to find rhymes. Personally, this’ll make it a lot easier to find potential words for a word list.
The ggtext package by Claus O. Wilke makes it a lot easier to work with text if you want to add a little bit of rich text to your plots.
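For example, turning on markdown rendering for a plot title is roughly this (a sketch, assuming a ggplot object p already exists):

```r
library(ggplot2)
library(ggtext)

# p is a hypothetical ggplot object; element_markdown() lets the title
# contain markdown/HTML, like the italics here
p +
  labs(title = "F1 trajectories for *bot* and *bought*") +
  theme(plot.title = element_markdown())
```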
RMarkdown, Bookdown, and Blogdown
Note: Now that Quarto is available, some of this material may be out of date.
Elegant, flexible, and fast dynamic report generation with R by Yihui Xie is a great resource for RMarkdown.
R Markdown: The Definitive Guide by Yihui Xie, J. J. Allaire, and Garrett Grolemund is the comprehensive guide to R Markdown and Bookdown. (A minimal .Rmd skeleton appears at the end of this list.)
15 Tips on Making Better Use of R Markdown by Yihui Xie offers some very useful and practical tips for getting the most out of RMarkdown. (These are slides from a presentation in 2019.)
bookdown: Authoring Books and Technical Documents with R Markdown by Yihui Xie. See an introduction to Bookdown by RStudio here.
If your love for Zotero is what’s preventing you from using RMarkdown, never fear! Zotero hacks: unlimited synced storage and its smooth use with rmarkdown by Ilya Kashnitsky is the perfect guide to getting those two integrated.
Becoming an R blogger is an excellent blog post by Rebecca Barter about how to start a blog and what kinds of things to do on it.
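To make those resources a little more concrete, here’s a minimal .Rmd skeleton (just a sketch): a YAML header, some Markdown prose, and one R chunk.

````markdown
---
title: "My analysis"
author: "Me"
output: html_document
---

Some prose written in Markdown, with inline R like `r nrow(mtcars)`.

```{r mpg-plot}
# An ordinary R chunk: the code runs and its output lands in the knitted document
plot(mpg ~ wt, data = mtcars)
```
````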
GIS and Spatial Stuff
Spatial and Spatiotemporal Data Analysis in R, a workshop by Edzer Pebesma, Roger Bivand, and Angela Li at the useR! 2019 conference on July 9, 2019.
Geocomputation with R by Robin Lovelace, Jakub Nowosad, and Jannes Muenchow.
R for Geospatial Processing by Nicolas Roelandt.
GIS and Mapping in R: An Introduction to the sf package by Olivier Gimenez.
I’ve needed to make a bivariate choropleth before, so Timo Grossenbacher’s blog post was helpful because it illustrates what this is and how you can do it in R.
I get all my shape files from the National Historical Geographic Information System (NHGIS) website.
And because I haven’t quite gotten the hang of it in R yet, I do all my mapmaking in QGIS, the free, open-source, Mac-friendly alternative to ArcGIS. Shout-out to Meagan Duever of UGA Libraries for teaching me everything I know about GIS.
Working with Census Data
Kyle Walker’s online book Analyzing US Census Data: Methods, Maps, and Models in R.
A Guide to Working with US Census Data in R by Ari Lamstein and Logan Powell is a nice, brief guide to census data and some places to go if you want to work with it in R.
The tidycensus package by Kyle Walker looks really slick and makes it easy to work with census data within the Tidyverse framework. This blog post, Burden of roof: revisiting housing costs with tidycensus, by Austin Wehrwein is a walkthrough of a real-world application with tidycensus.
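As a quick illustration of how slick it is (a sketch, assuming you’ve already requested and installed a Census API key with census_api_key()):

```r
library(tidycensus)

# Median household income (ACS variable B19013_001) by county in Georgia,
# with geometry attached so the result is ready to map
ga_income <- get_acs(
  geography = "county",
  variables = "B19013_001",
  state     = "GA",
  geometry  = TRUE
)
```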
Working with audio in R
This category includes anything that deals with audio. These are things I mostly do in Praat or other software, but someone has figured out how to do them in R.
- praatpicture by Rasmus Puggaard-Rode lets you make Praat Picture style plots of acoustic data.
- audio.whisper is an R package that lets you interact with OpenAI’s Whisper.
Miscellany
gt, or the “Grammar of Tables,” is basically ggplot2 but for tables. (A minimal sketch follows below.)
tidymodels is a collection of packages, harmonious with the tidyverse, that makes it really easy to run models on your data.
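Here’s the minimal gt sketch promised above (it uses a built-in dataset, so it should run as-is):

```r
library(dplyr)
library(gt)

# A small summary table, piped straight into gt
mtcars %>%
  group_by(cyl) %>%
  summarize(mean_mpg = mean(mpg), n = n()) %>%
  gt() %>%
  tab_header(
    title    = "Fuel economy by cylinder count",
    subtitle = "Means from the built-in mtcars data"
  )
```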
Self-explanatory tweets:
As 2019 comes to a close, I want to thank all of the lovely people in the #rstats world who have made my year a professional success. For each person in this thread, I'm going to tweet one thing they've done that I particularly appreciate.
— David Keyes (@dgkeyes) December 31, 2019
Data Visualization
Courses
- Here’s an entire open-access course on Data Visualization by Andrew Heiss, based in R and ggplot2.
Books
ggplot2 by Hadley Wickham is a comprehensive resource for learning all the ins and outs of ggplot2. Version 3 is due in 2020, but you can look through what’s been written so far here.
A ggplot2 grammar guide by Gina Reynolds is a great online resource for figuring out how ggplot2 works!
Data Visualization: A Practical Introduction by Kieran Healy. I haven’t had the time to look through it, but this book looks quite good. It covers data prep, basic plots, visualizing statistical models, maps, and a whole bunch of other stuff.
Fundamentals of Data Visualization by Claus O. Wilke is “meant as a guide to making visualizations that accurately reflect the data, tell a story, and look professional.”
Interactive web-based data visualization with R, plotly, and shiny by Carson Sievert is another free online book on data visualization in R. This has a good focus on interactivity since it involves plotly and Shiny.
Mastering Shiny by Hadley Wickham is under development and will be released in late 2020. I’m really looking forward to this comprehensive book on Winston Chang’s shiny package, but in the meantime you and I can peruse the online version for free.
Colors
I’ve given a workshop on colors in data visualization, which you can view here. In it, I list the following resources, plus a whole bunch of other ones.
Using colors in data visualization
Your Friendly Guide to Colors in Data Visualisation by Lisa Charlotte Rost is a great overview of using colors in data visualization with lots of links to other sites and resources.
What to consider when choosing colors for data visualization by Lisa Charlotte Rost has great brief tips for color in data visualization. Be sure to see the links at the bottom for more resources!
When you do create your own palettes, be sure to run it through this Color Blindness Simulator to make sure that everyone can see them. Nick Tierney’s blog post also walks you through a way to check this in R.
Stephen Few has a nice guide for using colors and has his own palette you can use.
Masataka Okabe and Kei Ito have a guide called Color Universal Design that is pretty well-known.
Fabio Crameri, Grace E. Shephard & Philip J. Heron’s article in Nature called The misuse of colour in science communication may help you when choosing a color palette.
Prepackaged color palettes
A monster compilation of color palettes in R can be found at Emil Hvitfeldt’s Github.
The scico package has a bunch of colorblind-safe, perceptually uniform, ggplot2-friendly color palettes for use in visuals. Very cool.
The Color Brewer website, while best for maps, offers great color palettes that are colorblind-safe and sometimes also printer-safe. They have native integration with ggplot2 via the scale_[color|fill]_[brewer|distiller] functions. (A minimal sketch using both scico and ColorBrewer scales appears at the end of this list.)
Paul Tol has come up with some additional color themes, which you can access with scale_color_ptol in the ggthemes package.
oklch-smooth, by Stephen Hutchings, is “a smooth, full spectrum sRGB color palette for data visualization.”
There is no shortage of color palettes. Here are a handful of ones I’ve seen and liked for one reason or another:
nationalparkcolors: An R package by Katie Jolly with color palettes based on vintage-looking national parks posters.
earthtones: An R package by Will Cornwell where you give it GPS coordinates and it’ll go to that location in Google Maps and create a color palette based on satellite images. Pretty cool.
RSkittleBrewer: An R package by Alyssa Frazee that includes color palettes based on Skittles!
pokepalettes.com: A simple webpage that takes a Pokémon name and generates a color palette.
wesanderson: An R package based on this Tumblr post that has color palettes based on Wes Anderson movies.
dutchmasters: Instead of coming up with your own colors, why not use ones created by Dutch painters? This is an R package by Edwin Thoen.
PrettyCols: An R package by Nicola Rennie.
Colors.css, “a nicer color palette for the web,” offers nice, customizable colors that work great for websites.
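And here’s the minimal sketch promised above: dropping a prepackaged ColorBrewer or scico palette into a ggplot2 plot (the scico call assumes you have that package installed):

```r
library(ggplot2)
library(scico)

# A discrete ColorBrewer palette for a categorical variable
ggplot(mpg, aes(displ, hwy, color = class)) +
  geom_point() +
  scale_color_brewer(palette = "Dark2")

# A continuous, perceptually uniform scico palette for a numeric variable
ggplot(faithfuld, aes(waiting, eruptions, fill = density)) +
  geom_raster() +
  scale_fill_scico(palette = "batlow")
```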
Creating your own color palettes
If you want to make your own discrete color scale in R, definitely check out Garrick Aden-Buie’s tutorial, Custom Discrete Color Scales for ggplot2. (A minimal sketch appears at the end of this list.)
Check out the simplecolors package, by Jake Riley, to find hex codes for consistently-named colors.
Definitely check out Adobe’s Color app for some inspiration on color palettes.
Also, check out Coolors for more inspiration on color palettes.
And if you have a start and end point, this Colorpicker app can get colors in between those points.
Timo Grossenbacher’s blog post on bivariate choropleths (also linked above in the GIS section) illustrates what these are and how to make one in R.
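Here’s the minimal sketch promised above: the quick-and-dirty version of a custom discrete scale is just a named vector of hex codes passed to scale_color_manual() (Garrick Aden-Buie’s tutorial shows how to wrap this up into a proper scale function):

```r
library(ggplot2)

# A hypothetical hand-picked palette, named by the levels it should map onto
my_palette <- c(compact = "#1b9e77", midsize = "#d95f02", suv = "#7570b3")

ggplot(subset(mpg, class %in% names(my_palette)),
       aes(displ, hwy, color = class)) +
  geom_point() +
  scale_color_manual(values = my_palette)
```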
Animation
- Thomas Lin Pedersen’s gganimate package has now made it possible to make really cool animations in R. Sometimes you want to add a bit of pizzazz to your presentation, but other times animation really is the best way to visualize something. Either way, this package will help you out a lot.
Rayshader
Definitely check out Tyler Morgan-Wall’s rayshader package. It makes it pretty simple to make absolutely stunning 3D images of your data in R. You can make 3D maps if you have spatial data, but you can also turn any boring ggplot2 plot into a 3D work of art. Seriously, go try it out.
Lego World Map - Rayshader Walkthrough by Arthur Welle is an awesome walkthrough on rayshader and maps made out of virtual Legos. It’s a lot of fun.
Making better plots
Edward Tufte is a statistician known for his series of four books that focus on best practices in the presentation of data: The Visual Display of Quantitative Information, Envisioning Information, Visual Explanations, and Beautiful Evidence. I read them over several months on the bus and they are very cool. As a practical application, this page by Lukasz Piwek shows how to implement many of these visualizations in R. You can also use ggthemes to get some of this implementation.
Joey Cherdarchuk of Darkhorse Analytics has put together some really succinct presentations on how to simplify the things you might put in a paper, like maps, charts, and tables, and on improving the data-ink ratio.
Claus Wilke’s Practical ggplot2 is a “repository [that] houses a set of step-by-step examples demonstrating how to get the most out of ggplot2, including how to choose and customize scales, how to theme plots, and when and how to use extension packages.”
Malcolm Barrett’s Designing ggplots: Making clear figures that communicate is a great walk-through, with code, on how to really make your plots look professional, with an emphasis on telling a story.
The Glamour of Graphics, a talk at RStudio::Conf 2020 by William Chase that discusses how to make nice-looking plots.
A ggplot2 Tutorial for Beautiful Plotting in R by Cédric Scherer.
Miscellany
The R Graph Gallery has hundreds of plots, with code, illustrating what the plots are typically used for and different variants of the same plot. Very cool.
My friend Andres Karjus has given several workshops on a wide range of data visualization topics, collectively called aRt of the figure: explore and visualize your data using R. You should definitely explore his GitHub and check out his materials.
This blog post by Jesse Sadler is a great tutorial on how to use R to visualize network data.
Plotting special characters or unique fonts can be tricky. Yixuan Qiu’s tutorial showtext: Using Fonts More Easily in R Graphs can help you with that.
George Bailey’s excellent workshop materials for visualizing vowel formant data can be found here.
Not sure what kind of data visualization you should use? Try From Data to Viz to help you find the most appropriate plot for your data.
Statistics Resources
General Statistics Knowledge
The American Statistical Association, which is essentially the statistics equivalent, in scope and prestige, of the Linguistic Society of America, put out a statement on p-values in 2016. In March of 2019, they followed up with a monster 43-article special issue, Statistical Inference in the 21st Century: A World Beyond p < 0.05, wherein they recommend that the expression “statistically significant” be abandoned. This has the potential to be a pivot point in the field of statistics. Why should a linguist care? Well, the first article in that issue says “If you use statistics in research, business, or policymaking but are not a statistician, these articles were indeed written with YOU in mind.” If you use statistics in your research, it might be worth reading through at least the first article of this issue.
The book Modern Dive: An Introduction to Statistical and Data Sciences via R by Chester Ismay and Albert Y. Kim is a free eBook that teaches the basics of R and statistics. See Andrew Heiss’s post about this book for more information.
Same Stats, Different Graphs: Generating Datasets with Varied Appearance and Identical Statistics through Simulated Annealing by Justin Matejka and George Fitzmaurice. This went viral in some circles and shows that you can get the exact same summary statistics with wildly different distributions. Very cool.
Here’s a BuzzFeed article by Stephanie M. Lee about a researcher who made the news because of his unbelievable amount of p-hacking and using “statistics” to lie about his data.
Have you learned about tests like t-tests, ANOVA, and chi-squared tests? Did you know they’re all just regression under the hood? Check out this explanation by Jonas Kristoffer Lindeløv called Common statistical tests are linear models. It’s mathy and based in R.
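A quick way to convince yourself of Lindeløv’s point (just a sketch with simulated data): a two-sample t-test with equal variances and a one-predictor linear model give you the same t statistic and p-value.

```r
set.seed(1)
d <- data.frame(
  group = rep(c("a", "b"), each = 30),
  y     = c(rnorm(30, mean = 0), rnorm(30, mean = 0.5))
)

# The classic two-sample t-test (equal variances assumed)
t.test(y ~ group, data = d, var.equal = TRUE)

# The same test as a linear model: the "groupb" row of the coefficient
# table has an identical t value and p-value
summary(lm(y ~ group, data = d))
```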
Linear mixed-effects models
Bodo Winter’s mixed-effects modeling tutorials are the best resource I’ve found on using these in linguistics research. It’s a two-part tutorial, so be sure to look through both of them.
Mixed-Effects Regression Models in Linguistics, edited by Dirk Speelman, Kris Heylen, & Dirk Geeraerts and published by Springer is an entire book on mixed-effects models, specifically for linguists.
Michael Clark’s post called Shrinkage in Mixed Effects Models has some beautiful illustrations that demonstrate shrinkage. In fact, he has written a much larger document explaining what mixed-effects models are and how to run them in R. (A minimal lme4 sketch appears at the end of this list.)
Reference Collection to push back against “Common Statistical Myths” is a crowdsourced compilation (managed by Andrew Althouse) of articles that may be used to argue against some common statistical myths or no-nos.
Lisa M. DeBruine & Dale J. Barr’s paper “Understanding Mixed-Effects Models Through Data Simulation”, in Advances in Methods and Practices in Psychological Science serves as a nice tutorial to mixed-effects modeling.
Stefano Coretta’s brief blog post, On Random Effects helps explain what a random effect even is.
Not sure how to actually run a linear mixed effects model? Try this PDF of Standard Operating Procedures For Using Mixed-Effects Models.
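And here’s the minimal lme4 sketch promised above (the data frame and column names are hypothetical): a model of F1 with a by-speaker random slope for vowel and by-speaker and by-word random intercepts.

```r
library(lme4)

# vowels is a hypothetical data frame with columns F1, vowel, speaker, and word
m <- lmer(F1 ~ vowel + (1 + vowel | speaker) + (1 | word), data = vowels)
summary(m)
```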
GAM(M)s
My dissertation makes heavy use of generalized additive mixed-effects models (GAMMs). Here are some resources that helped me learn about them. (A minimal mgcv sketch appears at the end of this list.)
Generalised Additive Mixed Models for Dynamic Analysis in Linguistics: A Practical Introduction by Márton Sóskuthy.
How to analyze linguistic change using mixed models, Growth Curve Analysis and Generalized Additive Modeling by Bodo Winter and Martijn Wieling is a tutorial on using GAMs—with one M—and Growth Curve Analysis.
Analyzing dynamic phonetic data using generalized additive mixed modeling: A tutorial focusing on articulatory differences between L1 and L2 speakers of English is another tutorial by Martijn Wieling in the Journal of Phonetics.
In fact, Martijn Wieling has the slides for a graduate course in statistical methods, including GAMMs, available on his website.
Studying Pronunciation Changes with GAMMs by Josef Fruehwald.
Overview GAMM analysis of time series data by Jacolien van Rij. I haven’t had time to go through this one yet, but it’s on my to-do list. Actually, all of her tutorials look great.
GAMs in R by Noam Ross is a free interactive course on GAMs in R.
Introduction to Generalized Additive Models with R and mgcv by Gavin Simpson.
If you don’t like the visuals in mgcv, try Gavin Simpson’s R package gratia, which offers some ggplot2 alternatives.
tidymv: Tidy Model Visualisation is an R package by Stefano Coretta that lets you visualize GAMMs using tidyverse-friendly code.
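Here’s the minimal mgcv sketch promised above (the data frame and column names are hypothetical): a GAMM with a smooth over time for each vowel and by-speaker factor smooths as random effects.

```r
library(mgcv)

# traj is a hypothetical data frame with columns F1, time (measurement point
# within the vowel), vowel (a factor), and speaker (a factor)
m <- bam(F1 ~ vowel +
           s(time, by = vowel) +                # a smooth over time for each vowel
           s(time, speaker, bs = "fs", m = 1),  # by-speaker factor smooths (random effects)
         data = traj)
summary(m)
```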
Other Models
I know there are other types of models out there but I haven’t had the opportunity to use them. Here are some resources I’ve found that might be good for me down the road.
15 Types of Regression You Should Know is a post on the blog Listen Data that is a nice overview of different kinds of regression and how to implement them in R.
Course materials for the generalized nonlinear models (GNM) half-day course at the useR! 2019 conference by Heather Turner. Here’s her full-day version from Zurich R Course series.
Bayesian Statistics
I have not yet learned about Bayesian stats, but here are some resources I’ve come across that I may use later.
Bayes Rules! An Introduction to Bayesian Modeling with R by Alicia A. Johnson, Miles Ott, and Mine Dogucu.
Richard McElreath’s Statistical Rethinking: A Bayesian Course Using R and Stan is an entire course.
Stefano Coretta, Joseph V. Casillas, and Timo Roettger’s learning materials for their Learn Bayesian Analysis for the Speech Sciences workshop.
Statistics for Linguists
Bodo Winter’s mixed-effects modeling tutorials are the best resource I’ve found on using linear mixed-effects models in linguistics research.
Generalised Additive Mixed Models for Dynamic Analysis in Linguistics: A Practical Introduction by Márton Sóskuthy is the best resource I’ve found on using generalized additive mixed-effects models in linguistics research.
Santiago Barreda and Noah Silbert’s Bayesian multilevel models for repeated-measures data: A conceptual and practical introduction in R is an entire course on Bayesian stats geared towards linguists.
Morgan Sonderegger’s book Regression modeling for linguistic data is a working draft of an intermediate-level book on statistical analysis for language scientists.
Have you used Varbrul or at least read a paper that has? You’ll know that there’s some terminology that is unique to that method. Josef Fruehwald’s video helps translate Varbrul to more contemporary terms.
Jessamyn Schertz’s class notes for LIN318: Talking Numbers. An incredibly detailed, free resource for statistics for linguists in R.
Miscellany
This workshop, Dimension reduction with R, by Saskia Freytag shows different methods for dimension reduction, weighs their pros and cons, and includes examples and visuals of their applications. Pretty useful.
If you use statistical modeling in your research, the report package is a useful tool for converting your models into human-readable prose. (A minimal sketch appears at the end of this list.)
Here’s an open-source course on data science by Danielle Navarro.
Here’s Michael Franke’s Introduction to Data Analysis.
This blog post by Alex Cookson does a cool job at explaining PCA while also including some super cool visuals.
This blog post by Joshua Loftus visualizes least squares as springs. Makes a lot of sense to me!
If you’ve come up with an outlier detection algorithm, try following Sevvandi Kandanaarachchi’s Testing an Outlier Detection Method to see if it works.
Easystats “is a collection of R packages, which aims to provide a unifying and consistent framework to tame, discipline, and harness the scary R statistics and their pesky models.”
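Here’s the minimal report sketch promised above (it uses a built-in dataset, so it should run as-is):

```r
library(report)

# Fit an ordinary model, then let report() describe it in prose
m <- lm(mpg ~ wt + hp, data = mtcars)
report(m)
```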
Praat Resources
Will Styler’s Praat tutorial is probably the most thorough I’ve seen. The PDF can be found here, but don’t forget to look at the page it comes from, which has more information about it.
Phonetics on Speed: Praat Scripting Tutorial by Jörg Mayer is what I find myself coming back to again and again.
SpeCT - The Speech Corpus Toolkit for Praat is a collection of well-documented Praat scripts written by Mietta Lennes. I often find my way to this page when I need help for a specific task in Praat and incorporate some of the code in these scripts into my own.
Michelle Cohn has written and posted a bunch of very useful Praat scripts that you can download and use.
A YouTube channel called ListenLab by Matt Winn that has a bunch of video tutorials on how to do stuff in Praat.
Another YouTube channel called Intro to Speech Acoustics that may be useful to students of acoustics, phonetics, etc.
And I’ve written a tutorial on writing a script for basic automatic formant extraction.
Working with audio
There are three main steps for processing audio: transcription, forced alignment, and formant extraction.
Automatic Transcription
There is software you can use for manual transcription, like Praat, Transcriber, and ELAN. But here are some tools I’ve seen that do automatic transcription.
CLOx is a new automatic transcriber available from the University of Washington. It’s a web-based service that uses Microsoft Bing’s speech recognition system to transcribe your audio. It’s estimated that a sociolinguistic interview can be transcribed in a fifth of the time of a manual transcription. The great news is that it’s available for several languages!
DARLA is actually a whole collection of tools available through a web interface from Dartmouth University. It can transcribe, align, and extract formants from your (English) audio files all in one go. For automatic transcription, you can use their own in-house system by choosing the “Completely Automated” method. They admit the transcriptions won’t be perfect, but they provide a handy tool for manual correction.
OH-Portal is by the Institute of Phonetics and Speech Processing. It works on several languages, and on clean lab data it’s a little faster to run this and correct the transcription than it is to do a transcription from scratch. It runs entirely in the web browser, so you don’t have to download anything.
Forced Aligners
I’ve got a lot of audio that I need to process, so a crucial part of all that is force aligning the text to the audio. Smart people have come up with free software to do this. Here’s a list of the ones I’ve seen.
DARLA, available from Dartmouth University, is the one I’ve used the most. It can transcribe, align, and extract formants from your (English) audio files all in one go. Its forced aligner was previously built on Prosodylab-Aligner but now uses the Montreal Forced Aligner (see below).
The Montreal Forced Aligner is a relatively new one that I heard about for the first time at the 2017 LSA conference. It is fundamentally different from other ones in that it’s built on software called Kaldi. It’s easy to set up and install, and I’ve used it on my own data. The benefit of this over DARLA is that it runs on your own computer, so you don’t have to wait for files to upload, and you can process files in bulk. Be sure to check out Michael McAuliffe’s blog for updates.
FAVE is probably the most well-known forced aligner. It’s open source and you can download it to your own computer from Joe Fruehwald’s GitHub page. Or, if you’d prefer, you can use UPenn’s web interface instead.
Prosodylab-Aligner is, according to their website, “a set of Python and shell scripts for performing automated alignment of text to audio of speech using Hidden Markov Models.” This is software available through McGill University that actually allows you to train your own acoustic model (e.g., on a non-English audio corpus). I haven’t used it yet, but if I ever need to process non-English audio, this’ll be my go-to.
SPPAS is a software package with several functions including forced alignment in several languages. Of the aligners you can download to your computer, this might be one of the easier ones to use.
WebMAUS is another web interface with multiple functions including a forced aligner for several languages.
Gentle advertises itself as a “robust yet lenient forced aligner built on Kaldi.” It’s easy to download and use and produces what appear to be very good word-level alignments of a provided transcript. It even ignored the interviewer’s voice in the file I tried. The output is a .csv file, so I’m not sure how to turn that into a TextGrid, and if you need phoneme-level acoustic measurements, a word-level transcription isn’t going to work.
Formant Extractors
Santiago Barreda’s Fast Track is my current go-to tool for automated formant extraction. It’s a Praat plug-in, but it works really well with the accompanying R package, FastTrackR. Give them both a try!
FAVE-Extract is the standard that tons of people use.
PolyglotDB works well with large, force-aligned corpora.
If you want to write a script yourself, I’ve written a tutorial on writing a script for basic automatic formant extraction.
Phonetics Resources
The rtMRI IPA chart has MRI videos of all the sounds on the IPA chart.
Jonathan Dowse’s IPA Charts with Audio includes basically any possible combination of co-articulations, regardless of whether they’re actually attested in human language.
- Pink Trombone is an interesting site that has an interactive simulator of the vocal tract. You can click around and make different vowels and consonants. Pretty fun resource for teaching how speech works.
Typography, Web Design, and CSS
I enjoy reading and attempting to implement good typography into my website. Here are some resources that I have found helpful for that.
Beautiful Websites
I designed this website more or less from scratch, so I can appreciate the work others put into their own academic sites. Here are some examples of beautiful websites that I have found that I really like.
Kieran Healy has one of the most beautiful academic websites I’ve ever seen. I created this category on this page just so I could include his page here. Wow.
Practical Typography by Matthew Butterick was my gateway into typography. My font selection and many other little details on my site (slides, posters, CV, etc.) were influenced by this book.
CSS
- If you enjoy the work of Edward Tufte and would like to incorporate some of his design principles into your website, you’ll be interested in Tufte CSS by Dave Liepmann. If you’re interested in your RMarkdown files rendering in a Tufte style (like this), there are ways to do that too, which you can read about in chapter 3 of bookdown by Yihui Xie or chapter 6 of R Markdown by Yihui Xie, J. J. Allaire, and Garrett Grolemund.
Academic Life
Occasionally, I’ll see posts with really good and insightful tips on how to be an academic. For the ones I saw on Twitter, I’ve put the first post of each thread here; click on it to go directly to that tweet, where you can read the rest.
How to make effective slides by Kieran Healy.
Advice to a young scholar by Kensy Cooperrider.
Twitter for Scientists by Daniel S. Quintana is full of insider tips and recommendations for how to use Twitter as an academic.
A list of self-explanatory tweets:
Hey academics-coming-up! Congratulations on sending out that article! However, that probably also means, a few months later, you got your article rejected. Not even a Revise and Resubmit. Worry not. It happens to all of us, most of the time. Here's a thread on what I do.
— Jeff Guhin (@jeffguhin) November 12, 2019
I finally went through all my bookmarked tweets to compile a list of resources I want my grad students to have and wanted to (1) thank everyone who posted these resources, and (2) pay it forward and share the compiled list with all of you!
— Kaitlin Fogg (@kaitlin_fogg) November 8, 2019
After reading approximately 30 applications over the past few days that explicitly requests a diversity statement. I got some notes on what to do and what not to do. The "DON'T" list is long but please bear with me. But first, lets define a diversity statement (1/x) pic.twitter.com/qx1e8EyIGJ
— Dr. Samniqueka Halsey (@Samniqueka_H) December 30, 2019
Use less text.
— Timo Roettger (@TimoRoettger) March 1, 2020
One of the most important tips for creating engaging scientific presentations is reducing text as much as possible. The audience is not there to read but to listen to you 1/7
@AcademicTwitter #AcademicChatter pic.twitter.com/ybR7cSRor2
How to revise:
— Michael Breakspear (@DrBreaky) June 19, 2020
As an editor and author I have seen many revised papers return to journals. Given effort, most go well (ie step toward acceptance). Some go pear-shaped. I’ve slowly improved and have an approach known by my group as the ’Breakspear method”. Here is its essence
Here’s what you’ll need to prepare if you want to pitch yr academic book project to a publisher this year:
— Laura Portwood-Stacer, Jeopardy Champ (she/her) (@lportwoodstacer) January 2, 2021
1. A working title for the book. Don’t worry, you can change it later.
2. A project description or overview. Summarize your main argument, how you prove it, why it matters
A review of 2020 reviews & a 🧵of jumbled thoughts:
— Koraly Pérez-Edgar 🇵🇷 (@Dr_Koraly) January 3, 2021
Ad-hoc Review requests received: 109
Requests accepted: 37
Action Editor ms for J1: 35
Action Editor ms for J2: 86
Thoughts on the current state of review:
1/
Here's some of the best advice I got when I became a manager last year! It's simple, but considering most people receive no management training whatsoever these days, it's better than nothing. Thread!
— ella dawson (@brosandprose) December 6, 2019
It is that time of the year where many aspirants will be applying for grad school and tenure track positions. I just wanted to share some advice that I wish I had known when I was going through these things. [continued below]
— 𝙷𝚒𝚖𝚊 𝙻𝚊𝚔𝚔𝚊𝚛𝚊𝚓𝚞 (@hima_lakkaraju) November 24, 2019
Miscellaneous
Just random stuff that doesn’t fit elsewhere.
The great American word mapper is an interactive tool put together by Diansheng Guo, Jack Grieve, and Andrea Nini that lets you see regional trends in how words are used on Twitter.
Collecting, organizing, and citing scientific literature: an intro to Zotero is a great tutorial on how to use Zotero by Mark Dingemanse. Zotero is a fantastic tool for, well, collecting, organizing, and citing scientific literature and I’m not exaggerating when I say that I could not be in academics without it.
Vulgar: A Language Generator is a site that automatically creates a new conlang, based on parameters that you specify. The free web version allows you to add whatever vowels and consonants you’d like to include, and it’ll create a full language: a language name; IPA chart for vowels and consonants; phonotactics; phonological rules; and paradigms for nominal morphology, definite and indefinite articles, personal pronouns, and verb conjugations; derivational morphology; and a lexicon of over 200 words. For $19 you can download the software and get a lexicon of 2000 words, derivational words, random semantic overlaps with natural languages, and the ability to customize orthography, syllable structure, and phonological rules. In addition to just being kinda fun, this is a super useful resource for creating homework assignments for students.
The EMU-webApp “is a fully fledged browser-based labeling and correction tool that offers a multitude of labeling and visualization features.” I haven’t given this enough time to learn to use it properly, but it seems very helpful.
Johannes Haushofer’s CV of Failures. Other people have written about this more elegantly than I could, but sometimes it’s nice to see that other academics fail too. You’re not going to get into all the conferences you apply for, your papers are sometimes going to be rejected, and you’re definitely not getting all the funding you apply for. I find it therapeutic to put together a CV of failures like this researcher did and to keep it updated and formatted just as I would a regular CV. Don’t let impostor syndrome get in the way by thinking others haven’t failed too.
Kieran Healy’s The Plain Person’s Guide to Plain Text Social Science is an entire book on an aspect of productivity that I’ve only thought about occasionally: what kind of software should you do your work in? Before you get too entrenched in your workflow, it’s good to consider what your options are.
ThisWordDoesNotExist.com is a fun site created by Thomas Dimson.
Niche for fellow Mormons, but this post by “Ziff” called “Church President Probabilities, Changes with the Death of One Q15 Member” is a really in-depth analysis that predicts who the next president of the church will be.
XKCD’s color survey is always fascinating to me. He displayed a random color and asked people to name it. People could retake the survey as much as they wanted. Hundreds of thousands of responses later, and he came up with a really cool crowd-sourced visualization of how English speakers categorize colors.
FiveThirtyEight’s “The Ultimate Halloween Candy Power Ranking”. They took a couple dozen Halloween candies, displayed images of two of them at random, and asked people which they’d rather have. Many, many responses later, and they have a nice ranking of people’s favorite candy.