Put another way, does building a program that perfectly mimics conversation with a human reveal anything about the nature of language (is it even a useful product)? Does research into modern machine learning techniques and architectures (e.g., RNNs, LSTMs) tell us anything about language (can we really commercialize this technology long term, say by building it into an operating system)? Do the mathematical techniques underlying current machine learning methods count as a theory of human language (will these techniques still be useful in industry in ten years -- should I invest in building a large, stable, performant code base around them)? We don't know, primarily because we have yet to build programs good enough to meet our standards. And when we do, will those programs, language models, and architectures generalize across human languages?