Large Language Models operate on tokens, usually produced by byte-pair encoding over raw bytes (though some tokenizers run the algorithm over Unicode characters instead). So LLMs don't natively process anything smaller than a token, which is why they're bad at breaking words down into characters.
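You can see this directly with a tokenizer library. Here's a minimal sketch, assuming the `tiktoken` package is installed (the specific word and vocabulary are just illustrative choices):

```python
# Minimal sketch: show that a word reaches the model as token chunks,
# not as individual characters. Assumes `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a byte-pair encoding vocabulary

word = "strawberry"
token_ids = enc.encode(word)

# The model only ever sees these chunk IDs -- never the letters inside them.
for tid in token_ids:
    print(tid, repr(enc.decode([tid])))
```

The word typically comes back as a handful of multi-character chunks, so a question like "how many r's are in strawberry?" gives the model no character-level view to work from.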
But more broadly, what would it even mean to communicate without language?