Randomized controlled trials (RCTs) are the gold standard in causal inference and are often used by decision-makers in the public and private sectors. However, RCTs are often expensive and take a long time to complete, so predicting their effects before completion is of practical importance. In this paper, we test the forecasting accuracy of groups of experienced forecasters, academic experts, and lay people in predicting such long-run causal effects. To do this, we recruit 511 unique participants and collect 25,980 individual forecasts. Participants predict the short-run as well as long-run effects of seven RCTs that collected data on a treatment effect spanning at least five years. We find that experienced forecasters and academics outperform lay people. We also find evidence that experienced forecasters perform better than academics on our primary accuracy measure, and that this is, at least in part, due to their superior calibration. While we find aggregation gains consistent with a wisdom-of-the-crowds effect, we also show that neither experienced forecasters nor academics consistently outperform a set of simple benchmarks. Additionally, we fail to find evidence that randomly assigned information about good forecasting practices, details of the RCT intervention, or the local study context improves forecasting accuracy. Overall, our finding that experienced forecasters can outperform academic experts is a novel addition to an otherwise mixed literature. However, the general pattern of results is largely in line with other work, showing that forecasting long-run events is extremely difficult and that improving forecasting performance remains somewhat intractable.