Forecasting superstorms like Hurricane Sandy takes a tremendous amount of computing power. The supercomputer at the National Centers for Environmental Prediction processes 74 trillion calculations per second. Even with all this muscle, the resulting forecasts can only predict a storm about two weeks in advance. Data for weather predictions is gathered from weather balloons, satellites, commercial airplanes, ocean buoys and weather stations. All of this data is fed into a supercomputer and stored in a storage system, much like the Powerwulf cluster. From there, the supercomputer uses differential equations to model new predictions every six hours, which allow scientists to track the intensity of a storm. Unfortunately, the weather doesn’t always cooperate with these models, and weather patterns need to be studied from thousands of miles away. For example, to predict the weather in Boston, scientists look at the weather in Hawaii a week beforehand, because any slight or unnoticed change in the atmosphere can amplify into a huge transformation in a week’s time.
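That amplification of tiny differences can be seen in miniature with the Lorenz equations, a classic toy model of atmospheric convection. The sketch below is not the model NCEP runs; it simply steps the Lorenz system forward from two starting points that differ by one part in a million and watches how far apart they drift, which is the same sensitivity that limits real forecasts to about two weeks.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one Euler time step."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def simulate(state, steps):
    """Run the model forward a given number of steps."""
    for _ in range(steps):
        state = lorenz_step(state)
    return state

def distance(p, q):
    """Euclidean distance between two states."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

# Two nearly identical "atmospheres": one differs by a millionth.
a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.000001)

# After a short time the two runs are still nearly indistinguishable...
early_gap = distance(simulate(a, 100), simulate(b, 100))

# ...but after a longer run the tiny difference has grown enormously.
late_gap = distance(simulate(a, 3000), simulate(b, 3000))

print(f"gap after 100 steps:  {early_gap:.2e}")
print(f"gap after 3000 steps: {late_gap:.2e}")
```

Running this, the early gap stays microscopic while the late gap grows to the size of the whole system: the unnoticed change in Hawaii has become the storm track error in Boston.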
In Sandy’s case, forecasts in the five days before the storm hit land predicted that it would glide off into the ocean and that no emergency warning was needed. However, as we all know, it collided with the New Jersey shore, causing flooding, fires, power outages and over 100 casualties. The prediction was wrong because the storm was so unusual and so many factors fed into it. Sandy came late in the hurricane season and was fueled by unseasonably warm waters. It was then pushed inland by a high-pressure system and merged with a cold front moving east from California. This combination, enhanced by a full moon and a high tide, created an intense superstorm that caused billions of dollars of damage to the East Coast.
To learn more about Sandy and what goes into forecasting a storm of that caliber, read the PBS article here.