We simulate the evolution of a domain language in small speaker communities. Experimental data (Garrod et al., 2007; Fay et al., 2008) show that human communicators can quickly evolve graphical languages in a constrained task (Pictionary), and that communities converge on a common language even without feedback about the success of individual communications. We postulate that simulations of such horizontal evolution must take into account properties of human memory (cue-based retrieval, learning, and decay). We implement a model that can draw abstract concepts through sets of related, non-abstract concepts, and recognize such drawings. Its knowledge base is a network whose association strengths are randomly sampled from a natural distribution found in a text corpus; the network is a mixture of knowledge shared between agents and knowledge individual to each agent. In three experiments, we show that the agent communities converge, but that initial convergence is stronger when communities are structured so that the same pairs of agents interact throughout. Convergence is weaker in communities where agents do not swap roles (between recognizing and drawing), predicting that bi-directional communication is necessary for domain language evolution. Average and ultimate recognition performance depend on how much knowledge the agents initially share.
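The mixture of shared and individual knowledge described above can be illustrated with a minimal sketch. All names and parameters here (`build_community`, `shared_fraction`, `density`, the Pareto distribution as a stand-in for the corpus-derived strength distribution) are illustrative assumptions, not the paper's actual implementation:

```python
import random

def build_community(concepts, n_agents, shared_fraction=0.5, density=0.3, seed=0):
    """Toy per-agent association networks: a mixture of a common
    (shared) edge pool and agent-specific (individual) edges.

    Association strengths are drawn from a heavy-tailed Pareto
    distribution as a stand-in for the natural, corpus-derived
    distribution assumed in the model description.
    """
    rng = random.Random(seed)
    pairs = [(a, b) for a in concepts for b in concepts if a != b]

    # Common pool of associations potentially shared by everyone.
    shared = {p: rng.paretovariate(2.0) for p in pairs if rng.random() < density}

    agents = []
    for _ in range(n_agents):
        net = {}
        for p in pairs:
            if p in shared and rng.random() < shared_fraction:
                net[p] = shared[p]                 # shared knowledge, identical across agents
            elif rng.random() < density:
                net[p] = rng.paretovariate(2.0)    # individual knowledge, agent-specific
        agents.append(net)
    return shared, agents

shared_pool, community = build_community(list("abcdefghij"), n_agents=3, seed=1)
```

Raising `shared_fraction` yields communities whose agents start from more similar networks, which is the dimension varied when recognition performance is related to the amount of initially shared knowledge.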