In this work, we study the use of neural ranking models for the task of Knowledge Graph Question Answering (KGQA). First, we propose a novel Slot Matching model that exploits the structural characteristics of knowledge graphs and outperforms several baselines. Second, we demonstrate the effectiveness of transfer learning techniques, including the use of pre-trained language models and fine-tuning from one KGQA dataset to another. We further explore methods to automatically generate natural language questions and study their effect on model performance. Finally, we analyze model robustness using slightly out-of-domain samples and noisy examples.