Abstract
The predictive accuracy of regional hydrologic models often varies across both time and space. Interpreting relationships between watershed characteristics, hydrologic regimes, and model performance can reveal potential areas for model improvement. In this study, we use machine learning to assess the performance of a regional hydrologic model developed to forecast the occurrence of streamflow drought. We demonstrate our methodology using a regional long short-term memory (LSTM) deep learning model developed by the U.S. Geological Survey (USGS) and data from 384 streamgages across the Colorado River Basin region. Performance was assessed by clustering catchments using: (a) physical and climatological catchment attributes, and (b) time series of streamflow drought signatures. We examined the association of USGS LSTM model error measures with clusters generated by both approaches to interpret meaningful spatial and temporal information about LSTM model performance. Clustering on static catchment attributes identified elevation, degree of streamflow regulation, baseflow contribution, catchment aridity, and drainage area as the attributes most influential on model performance. Clustering gages by their drought signatures revealed that catchments with significant seasonal peak runoff between January and June generally exhibited better model performance. Additionally, a Random Forest classifier trained on physical and climatological catchment attributes successfully predicted LSTM model performance (F1 score of 0.72). A low degree of flow regulation was identified as a key indicator of better LSTM model performance. These findings point to opportunities for improving USGS LSTM model performance in future hydrologic drought prediction efforts at regional and CONUS scales.
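The classification step summarized above can be illustrated with a minimal sketch: a Random Forest classifier trained on static catchment attributes to predict a binary LSTM performance category. The column names, performance threshold, and file name below are illustrative assumptions, not the study's actual data or configuration.

```python
# Illustrative sketch only: predict LSTM performance class from catchment attributes.
# Attribute names, the input file, and the skill threshold are assumed, not from the paper.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Hypothetical table: one row per streamgage with static attributes and an LSTM error measure.
gages = pd.read_csv("catchment_attributes_with_lstm_error.csv")

attributes = ["elevation", "flow_regulation_index", "baseflow_index",
              "aridity_index", "drainage_area"]  # assumed column names
X = gages[attributes]
# Binary target: "good" vs. "poor" LSTM performance at an assumed skill threshold.
y = (gages["lstm_skill_score"] >= 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

print("F1:", f1_score(y_test, clf.predict(X_test)))
# Feature importances suggest which attributes (e.g., degree of flow regulation)
# most strongly separate the performance classes.
print(dict(zip(attributes, clf.feature_importances_.round(3))))
```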