Smartphones, voice assistants, and home robots are becoming more intelligent every day to support humans in their daily routines and tasks. For such technologies to be accepted by their users and to succeed, they must be socially informed, responsive, and responsible: they need to understand human behaviour and socio-emotional states and adapt to their users' profiles (e.g., personality) and preferences. Motivated by this, there has been significant effort on recognising personality from multimodal data over the last decade~\cite{survey1,survey2}. However, to the best of our knowledge, existing methods have relied exclusively on one-size-fits-all approaches, performing personality recognition without taking user profiles (e.g., gender and age) into account. In this paper, we take a different approach and argue that a one-size-fits-all approach is insufficient for personality recognition, as previous research has shown significant gender differences in personality traits. For example, women tend to report higher scores for extraversion, agreeableness, and neuroticism than men. Building upon these findings, we first cluster the participants into two profiles based on their gender, namely female and male, and then use Neural Architecture Search (NAS) to automatically design a personality-recognition model for each profile. Each network is trained separately on visual, textual, and time-based features, and the final prediction is obtained by aggregating the outputs of the video and text modalities.
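To make the pipeline concrete, the sketch below illustrates the profile-conditioned routing and late-fusion step in Python. It is a minimal illustration under stated assumptions: the model class is a placeholder (the actual architectures are found by NAS separately per profile), and the equal-weight averaging of modality outputs is an assumed aggregation, not necessarily the exact fusion rule used in the experiments.

\begin{verbatim}
import numpy as np

# Placeholder for a NAS-designed network; in the actual system the
# architecture is searched separately for each profile and modality.
class ProfileModel:
    def __init__(self, profile, modality):
        self.profile = profile    # "female" or "male"
        self.modality = modality  # "video" or "text"

    def predict(self, features):
        # Stub: would return the five Big Five trait scores.
        return np.zeros(5)

# One model per (profile, modality) pair.
models = {(p, m): ProfileModel(p, m)
          for p in ("female", "male")
          for m in ("video", "text")}

def predict_personality(profile, video_feats, text_feats):
    """Route a sample to its profile's models and fuse modalities."""
    video_pred = models[(profile, "video")].predict(video_feats)
    text_pred = models[(profile, "text")].predict(text_feats)
    # Assumed aggregation: simple average of the two modality outputs.
    return (video_pred + text_pred) / 2.0
\end{verbatim}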