With the rising popularity of open-source-based solutions, high performance computing (HPC) has infiltrated research labs, data centers, financial institutions, and engineering suites. Given these increasingly complex environments, IT professionals face an ongoing challenge: finding suitable tools to effectively monitor, manage, and configure complex cloud-computing installations and HPC clusters. To tackle this challenge, PSSC Labs combines high performance, open-source components with proven Intel® Xeon® processor-based platforms to create competitive solutions for the top tiers of computing requirements.
Still looking for a way to enhance your high-performance server satisfaction in 2013? Look no further. In this post, we would like to highlight a featured product available to you straight from our labs. Introducing the POWERSERVE DUO T2000, the perfect building block for your Cluster & Cloud computing environment.
It offers two complete and independent servers in just 1U of rack space. Unlike other blade server offerings, there is no backplane or shared power supply, and you can mix and match configurations to best suit your computing needs: one half of the system can be based on Intel processors and the other half on AMD processors.
In creating this product, we understood that time is money, so effortless maintenance and reduced support costs are at the heart of the system design. All routine service and upgrades can be done by simply pulling the servers from the rack for easy access to all components. A dream come true? We think so.
If you would like more information on giving your computing New Year's resolution a real burst of power, be sure to visit the supercharged workstation page on our website.
The astrophysics modeling specialists in our offices are thrilled to hear about the Mars Curiosity rover finding organic signals on the red planet. Curiosity discovered complex chemistry and hints of long-sought organic compounds that may have aided primitive life at one time. Traces of chlorine, sulfur, water, and organic compounds were found in the Mars dirt samples that Curiosity scooped from the surface. The NASA team back on Earth is still unsure whether these compounds actually come from Mars or from contamination carried along by the robot itself.
An article by Space.com discusses how Curiosity's SAM instrument is basically a teeny tiny oven that cooks Mars dirt samples; the gases the dirt gives off allow Curiosity to determine its chemical makeup. Additionally, Curiosity's Chemistry and Mineralogy (CheMin) instrument found that the terrain is a mix of volcanic, glassy, non-crystallized materials.
But this doesn't mark the end of Mars discoveries. NASA has already approved a new rover to launch to Mars in 2020. This rover should help NASA prepare for eventually bringing samples from Mars back to Earth. Those new samples will give scientists more information about whether life ever existed on the red planet.
We at PSSC Labs are excited to see what new information Curiosity will send to NASA's high-performance servers. Be sure to check back for more news and information on the industry. For more information on Curiosity and the 2020 mission, check out the Space.com article by clicking here.
Astrophysics supercomputer specialists are absolutely pumped for this galaxy news hitting the internet. The Gravity Recovery and Interior Laboratory (GRAIL) mission recently created a moon gravity map revealing the moon's internal structure and composition in extraordinary detail. This new map reveals a literal mountain of features, including tectonic structures, volcanic landforms, basin rings, crater central peaks, and tons of craters. The gravity map also shows that the moon's gravitational field is unlike that of any other body in our solar system.
These amazing maps were created from twin NASA probes that transmitted radio signals to one another to precisely measure the distance between them as they orbited the moon. That information was then sent to high-performance servers back on Earth, where it helped scientists create these amazing images.
These breathtaking images are only the beginning. The mission has entered an extended phase that began Aug. 20 and will continue through Dec. 17. For more information and images regarding the mission to study the moon's gravity, visit NASA's website here.
News from the UK reports that the world's oldest working digital computer has been rebooted and is once again a clattering and flashing body of power after a three-year restoration effort. The machine, known as WITCH, served as the hardware for the UK's atomic energy research program in the 1950s. Forgotten as technology advanced, WITCH was rediscovered by chance in a municipal storeroom, where it had waited for 15 years. Now that is a pleasant surprise.
Our computational-modeling enthusiasts can't help but feel a little nostalgic about this piece. To think: this is where the computing industry began, and only 61 years ago. It really brings to light just how quickly technology advances; not that long ago, supercharged workstations didn't even exist. Working with them every day, that time frame is hard to put in long-term perspective, but it is also inspiring. We can only imagine what strides the next decade of technology will bring.
To read more about WITCH and its rediscovery, check out an article by Extreme Tech here.
Twitter may be full of chitter-chatter, news, and simple statements, but it is also an incredible compilation of data from which high-performance servers can draw insightful information and graphics. During Hurricane Sandy, SGI partnered with the University of Illinois to scan 50 million international tweets in the Global Twitter Heartbeat project and create real-time heat maps of positive and negative tweets.
According to SGI's Facebook page, "The project analyses every tweet to assign location (not just GPS-tagged tweets, but processing the text of the tweet itself) and tone values, and then visualizes the conversation in a heat map infographic that combines and displays tweet location, intensity and tone." The computational modeling of all of this information could prove very telling in the future for opinion surveys and psychological analysis of what is happening in our nation. Take a look at the video below; it will give you a good idea of people's negative and positive thoughts during Sandy's trek up the East Coast.
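To give a feel for what a pipeline like that involves, here is a minimal sketch in Python that bins tweets by location and a crude lexicon-based tone score into a heat map grid. It is purely illustrative: the word lists, field names, and grid size are our own assumptions, not SGI's actual system.

```python
# Hypothetical sketch of a tweet heat map pipeline (illustrative, not SGI's code).
# Assumes each tweet is a dict with 'lat', 'lon', and 'text' fields.
from collections import defaultdict

POSITIVE = {"safe", "thankful", "relief", "ok", "fine"}
NEGATIVE = {"flood", "damage", "scared", "outage", "lost"}

def tone(text):
    """Very crude lexicon-based tone: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def heat_map(tweets, cell_deg=1.0):
    """Bin tweets into lat/lon cells, accumulating count (intensity) and net tone."""
    grid = defaultdict(lambda: {"count": 0, "tone": 0})
    for t in tweets:
        cell = (int(t["lat"] // cell_deg), int(t["lon"] // cell_deg))
        grid[cell]["count"] += 1
        grid[cell]["tone"] += tone(t["text"])
    return grid

tweets = [
    {"lat": 40.7, "lon": -74.0, "text": "Flood damage everywhere, power outage"},
    {"lat": 40.7, "lon": -74.0, "text": "We are safe and thankful"},
]
for cell, stats in heat_map(tweets).items():
    print(cell, stats)
```

A real system would of course use far richer text analysis and geocoding, but the basic shape, score each tweet, place it on a grid, and color the grid by intensity and tone, is the same.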
Oak Ridge National Laboratory's supercomputer, Titan, is considered the most powerful of all high-performance computers. Surprisingly, it is also one of the most energy efficient: the exciting news that Titan ranks number three on the Green500 list was announced Wednesday, November 14. That ranking reflects the significant power savings of Titan's hybrid architecture, which combines graphics processing units (GPUs) that evolved from gaming systems with traditional CPUs.
The system has 18,688 nodes, each with a 16-core AMD Opteron 6274 processor and an NVIDIA Tesla K20X GPU accelerator, on top of roughly 700 terabytes of memory. The hardware itself is pretty intense and can handle hundreds of thousands of calculations simultaneously by relying on its 299,008 CPU cores. These cores guide the simulations, allowing the NVIDIA GPUs to do the heavy lifting.
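As a quick back-of-the-envelope check of how those published figures fit together (our own arithmetic, not an official spec sheet):

```python
# Back-of-the-envelope check of Titan's published figures (illustrative only).
nodes = 18_688
cpu_cores_per_node = 16    # one 16-core AMD Opteron 6274 per node
gpus_per_node = 1          # one NVIDIA Tesla K20X accelerator per node

total_cpu_cores = nodes * cpu_cores_per_node
total_gpus = nodes * gpus_per_node

print(f"CPU cores: {total_cpu_cores:,}")    # 299,008 -- matches the figure above
print(f"GPU accelerators: {total_gpus:,}")  # 18,688, one per node
```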
Titan is sure to help supercharged workstation enthusiasts complete their work with greater speed and accuracy. We at PSSC Labs are especially happy to see that a greener way of doing business is setting the bar for the future of supercomputers.
For more information on Titan, refer to this Product Design and Development article here.
We're going to need more than one Powerwulf Cluster to support Big Data: the compilation of all the blogs, status updates, forum discussions, and video and audio content created, exchanged, curated, and archived by corporations, businesses, institutions, non-profits, and consumers. Now that is a lot of information. This body of data is approximately 10,000 times larger than the average enterprise database.
With the movement to social media, Big Data is essentially the "Holy Grail" for delivering acceptable social-media ROI. These huge data sets will allow marketers and business leaders to take predictive approaches to clients and consumers. Doing so will call for complex cluster and cloud computational modeling of the enormous amounts of behavioral information on the World Wide Web. Businesses will need to run reactive analytics to assess live data and deep analytics to dig into stored information and make educated predictions about the market. Companies ignoring this available knowledge run the risk of being left in the dust behind competitors. For those in advertising, marketing, and public relations, understanding how companies can use Big Data will be a very important element of a successful professional career.
So what will this data look like? Pictured below is a visualization IBM created of Wikipedia edits. At multiple terabytes in size, the text and images of Wikipedia are a classic example of what Big Data will entail.
A little overwhelmed? You aren't the only one. With this amount of information becoming available to so many businesses and individuals, there are going to be a ton of professionals in need of ultimate storage solutions. With Big Data still being defined, you have time to learn and to be an innovator once you have it at your fingertips. Want more information about this information revolution? Read more about Big Data in this article.
Forecasting superstorms like Hurricane Sandy takes a tremendous amount of weather computing power. The supercomputer at the National Centers for Environmental Prediction processes 74 trillion calculations per second, yet even with all that muscle, its models can only predict a storm about two weeks in advance. Data for weather predictions is gathered from weather balloons, satellites, commercial airplanes, ocean buoys, and weather stations. All of this data is fed into a supercomputer and held in a storage system, much like a Powerwulf cluster. From there, the supercomputer uses differential equations to model new predictions every six hours, and these predictions allow scientists to track the intensity of the storm. Unfortunately, the weather doesn't always cooperate with these models, and weather patterns need to be studied from thousands of miles away. For example, in order to predict the weather in Boston, scientists look at the weather in Hawaii a week beforehand, and any slight or unnoticed change in the atmosphere can amplify into huge transformations in a week's time.
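To give a rough sense of what "stepping differential equations forward in time" means in practice, here is a toy one-dimensional sketch in Python. It bears no resemblance to the physics in a real forecast model; the grid size, wind speed, and time step are arbitrary assumptions chosen only to show the idea of marching a field forward step by step.

```python
# Toy illustration of forecast-style time stepping (not a real weather model).
# Advects a temperature-like "bump" along a 1-D line of grid points using an
# upwind finite-difference update for du/dt + c * du/dx = 0.
import numpy as np

nx, dx = 100, 1.0     # 100 grid points, 1 unit apart
c, dt = 1.0, 0.5      # wind speed and time step (c*dt/dx <= 1 keeps the scheme stable)
u = np.exp(-0.01 * (np.arange(nx) - 20) ** 2)   # initial "weather feature" centered at point 20

for step in range(60):                           # march the model forward 60 small steps
    u[1:] -= c * dt / dx * (u[1:] - u[:-1])      # upwind update; left boundary held fixed

print("feature is now centered near grid point", int(np.argmax(u)))  # it has drifted downwind
```

A production model does this in three dimensions, with dozens of coupled equations for wind, pressure, moisture, and temperature, across millions of grid points, which is exactly why it takes trillions of calculations per second.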
In Sandy's case, five days before the storm hit land, it was predicted that the storm would glide off into the ocean and that no emergency warning was needed. However, as we all know, it collided with the New Jersey shore, causing flooding, fires, power outages, and over 100 casualties. The prediction went wrong because the storm was so unusual and so many factors went into it. Sandy came late in the hurricane season and was fueled by unseasonably warm waters. It was then pushed inland by a high-pressure system and merged with a cold front moving east from California. This combination was amplified by a full moon and a high tide, creating an intense superstorm that did billions of dollars of damage to the East Coast.
To read more about Sandy and what goes into forecasting a storm of that caliber, read a PBS article here.
The Department of Defense issued a challenge for institutions to design and engineer supercomputer-driven robots for intense disaster-response purposes. The task is to build an adult-sized robot that can enter a vehicle, drive it, exit the vehicle, walk over rubble, clear objects blocking a door, open the door, and enter a building. From that point, the robot must visually and audibly locate and shut off a leaking valve, connect a hose or connector, climb a ladder, and traverse an industrial walkway.
Sounds intense? There's more. The robot must complete the course by using a power tool to break through a concrete wall, and to add even more pressure, there is a time limit for completing these tasks. The ultimate goal is to create a machine able to infiltrate a disaster zone and save lives where it is too dangerous for humans to attempt.
Among the challengers is Virginia Tech, which has created a prototype (pictured above) named THOR, the Tactical Hazardous Operations Robot. THOR is "light, agile, and resilient with perception, planning and human interface technology that infers a human operator's intent." Seven international teams will take part in the challenge, developing hardware and software systems, similar to a Powerwulf Cluster configuration, to simulate rescues for the robots. Check out the video of this competitor confronting obstacles.
Teams will compete against each other starting in June 2013 with the Virtual Robotics Challenge. Next will be a live event in December 2013 and another in December 2014. Teams that advance from each track will receive additional funding, and the winner will take home an additional two million dollars.