What models do you train for fun, and what GPU/hardware do you use?

This question tests whether you’ve worked on machine learning projects outside of a corporate role and whether you understand the basics of resourcing projects and allocating GPU time efficiently. Expect questions like this to come from hiring managers who want a better sense of your portfolio and what you’ve built independently.

When this question comes up in a machine learning interview, your response should showcase your passion for the field, your willingness to experiment and learn, and your familiarity with a range of models and hardware setups.

Here’s a sample answer:


“For fun, I enjoy experimenting with a variety of machine learning models, constantly exploring new techniques and algorithms. Some of my favorite models to train in my spare time include convolutional neural networks (CNNs) for image recognition tasks, recurrent neural networks (RNNs) for sequential data analysis, and generative adversarial networks (GANs) for creative applications like image generation.

Regarding hardware, I currently use a combination of GPU and CPU resources depending on the scale and complexity of the models I’m working with. For GPU acceleration, I typically use NVIDIA GPUs such as the GeForce RTX series or Quadro series, leveraging their CUDA cores for parallel processing. I also use cloud-based GPU instances from providers like AWS, Google Cloud, or Azure when I need to scale up my experiments or tackle larger datasets.

Overall, my approach is to stay adaptable and use the appropriate hardware for the task at hand, ensuring efficient training and experimentation while keeping up with the latest advancements in the field.”


This response demonstrates your enthusiasm for machine learning, your familiarity with a range of models, and your ability to match hardware to the task at hand. It also shows that you’re open to experimentation and comfortable using both local and cloud-based resources as needed.
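If the interviewer probes the hardware side further, it helps to show you write device-agnostic training code rather than hard-coding a GPU. Below is a minimal, dependency-free sketch of picking a training device; the `pick_device` function is a hypothetical helper for illustration. It uses the presence of `nvidia-smi` on the PATH as a rough proxy for an NVIDIA driver — in a real PyTorch project you would rely on `torch.cuda.is_available()` instead.

```python
import shutil


def pick_device() -> str:
    """Return a device string for training: 'cuda' if an NVIDIA
    driver appears to be installed (nvidia-smi on the PATH),
    otherwise 'cpu'.

    Note: this is a heuristic for the sake of a dependency-free
    example; frameworks expose more reliable checks, e.g.
    torch.cuda.is_available() in PyTorch.
    """
    return "cuda" if shutil.which("nvidia-smi") else "cpu"


device = pick_device()
print(f"Training on: {device}")
```

Mentioning a habit like this signals that your side projects run the same on a laptop CPU, a local RTX card, or a rented cloud instance, which is exactly the adaptability the sample answer claims.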