Volume 5 • 2022
Papers
Evaluating Structural Economy Claims in Relative Clause Attachment
Aniello De Santo and So Young Lee
2022-02-01 Volume 5 • 2022 • 65-75
Learning Argument Structures with Recurrent Neural Network Grammars
Ryo Yoshida and Yohei Oseki
pp. 101-111
A Model Theoretic Perspective on Phonological Feature Systems
Scott Nelson
pp. 1-10
Representing Multiple Dependencies in Prosodic Structures
Kristine M. Yu
pp. 171-183
Inferring Inferences: Relational Propositions for Argument Mining
Andrew Potter
pp. 89-100
Parsing Early Modern English for Linguistic Search
Seth Kulick, Neville Ryant and Beatrice Santorini
pp. 143-157
SCiL 2022 Editors' Note
Allyson Ettinger, Tim Hunter and Brandon Prickett
Remodelling complement coercion interpretation
Frederick G Gietz and Barend Beekhuizen
pp. 158-170
A split-gesture, competitive, coupled oscillator model of syllable structure predicts the emergence of edge gemination and degemination
Francesco Burroni
pp. 11-22
How Well Do LSTM Language Models Learn Filler-gap Dependencies?
Satoru Ozaki, Dan Yurovsky and Lori Levin
pp. 76-88
ANLIzing the Adversarial Natural Language Inference Dataset
Adina Williams, Tristan Thrush and Douwe Kiela
pp. 23-54
Learning Stress Patterns with a Sequence-to-Sequence Neural Network
Brandon Prickett and Joe Pater
pp. 112-118
Typological Implications of Tier-Based Strictly Local Movement
Thomas Graf
pp. 184-193
Linguistic Complexity and Planning Effects on Word Duration in Hindi Read Aloud Speech
Sidharth Ranjan, Rajakrishnan Rajkumar and Sumeet Agarwal
pp. 119-132
Abstracts
The interaction between cognitive ease and informativeness shapes the lexicons of natural languages
Thomas Brochhagen and Gemma Boleda
pp. 217-219
Learning Input Strictly Local Functions: Comparing Approaches with Catalan Adjectives
Alexander Shilen and Colin Wilson
pp. 244-246
When Classifying Arguments, BERT Doesn't Care About Word Order... Except When It Matters
Isabel Papadimitriou, Richard Futrell and Kyle Mahowald
pp. 203-205
Extended Abstracts
Analysis of Language Change in Collaborative Instruction Following
Anna Effenberger, Eva Yan, Rhia Singh, Alane Suhr and Yoav Artzi
pp. 194-202
MaxEnt Learners are Biased Against Giving Probability to Harmonically Bounded Candidates
Charlie O'Hara
pp. 229-234
Learning Constraints on Wh-Dependencies by Learning How to Efficiently Represent Wh-Dependencies: A Developmental Modeling Investigation With Fragment Grammars
Niels Dickson, Lisa Pearl and Richard Futrell
pp. 220-224
Can language models capture syntactic associations without surface cues? A case study of reflexive anaphor licensing in English control constructions
Soo-Hwan Lee and Sebastian Schuster
pp. 206-211
Universal Dependencies and Semantics for English and Hebrew Child-directed Speech
Ida Szubert, Omri Abend, Nathan Schneider, Samuel Gibbon, Sharon Goldwater and Mark Steedman
pp. 235-240
Masked language models directly encode linguistic uncertainty
Cassandra Jacobs, Ryan J. Hubbard and Kara D. Federmeier
pp. 225-228