Feedback on how students progress through completing subgoals can improve students’ learning and motivation in programming. Detecting subgoal completion is a challenging task, and most learning environments do so with either expert-authored models or data-driven models. The two approaches have complementary advantages: expert models encode domain knowledge and achieve reliable detection, but they require extensive authoring effort and often cannot capture all of students’ possible solution strategies, while data-driven models scale easily but may be less accurate and interpretable. In this paper, we take a step toward achieving the best of both worlds: a data-driven model that intelligently detects subgoals in students’ correct solutions, combined with human expertise in editing the resulting data-driven subgoal rules to provide more accurate feedback to students. We compared our hybrid “humanized” subgoal detectors, built from data-driven subgoals modified with expert input, against an existing data-driven approach and baseline supervised learning models. Our results showed that the hybrid model outperformed all other models in overall accuracy and F1-score. Our work advances the challenging task of automated subgoal detection during programming while laying the groundwork for future hybrid expert-authored/data-driven systems.