Financial Trading Infrastructure: The Era of Cloud 2.0

Jacob Loveless and Howard Lutnick at Cantor Fitzgerald, NYC, 12-20-12

Guest Contributor: Jacob Loveless, CEO, Lucera Financial Infrastructures

The freedom to try new things
The equity downturn has fueled a trend in multi-asset trading that is prompting firms to test new strategies. They realize they can no longer merely trade or price a single asset class. To compete, they must have asset diversification and multi-asset trading strategies – but many lack the freedom, infrastructure scalability and resources to do so.

Historically, a firm would have to wait weeks or months to procure, deploy and configure the infrastructure components required to test a trading strategy in a new asset class or location. This lengthy process slows time-to-market and demands a large up-front investment of money and resources – a barrier to innovation.

Managed trading services give these firms the ability to quickly deploy secure, high-performance systems, lower total cost of ownership (TCO), predict and scale monthly expenditure and create new possibilities for trade innovation, strategy development and alpha generation. That means financial trading firms can test applications and new ideas in close to real-time, while predicting and controlling costs.

For these reasons, Aite Group projects that global spend on managed services will increase from $500 million in 2012 to $620 million by 2015, and Tabb Group estimates that by 2016 adoption of managed services infrastructure across companies will hit 50%. With efficiency and scalability now under control, organizations are looking to their infrastructure to solve greater problems.

A big red button scenario
High-profile trading freezes and glitches have drawn considerable attention to the industry’s need for a “kill” switch or “big red button.” These could be used in a situation where dangerous order flow needs to be halted to minimize market impact. Regulators agree this could minimize risk but hotly debate who should be able to push that “big red button” and how much of the infrastructure it should shut down when pushed.

The difficulty with the proposed “kill switch” is that it would shut the firm off from the entire market by preventing the flow of information in and out of the company. While in the short-term it would prevent that company from sending potentially compromised orders out into the market, it also handicaps the firm from receiving information from the market that could help identify and reconcile the issue.

Take the centralized limit order book where all participants push data as an example. If something goes wrong and the order book is affected, the firm has to bring the whole system down. But what about a scenario when only one server is impacted? What effect would it have if only the compromised portion of the infrastructure was taken offline? Or better yet, what if an exchange could turn off one market participant from sending orders but still allow them to receive data in order to quickly reconcile its issue without impacting the rest of the market?

These scenarios demonstrate the importance of being able to segment infrastructure into zones – a technique that is becoming critical to delivering operational advantage. The ideal big red button scenario would allow the system to react quickly to protect the business and the market, turning off only the piece of the infrastructure experiencing failure. In the event of a problem in a software-defined network, a company can self-select to shut down a compromised zone, remaining fully operational while the issue is addressed internally. This zoning technique guards both the participant and the market.
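The zoning idea can be sketched in code. The following is a minimal illustration only, not a description of any vendor's actual system: the `Zone` class, the zone names and the per-zone send/receive flags are all assumptions made for the example. In a real software-defined network the isolation would be enforced at the network layer, not in application logic.

```python
from dataclasses import dataclass


@dataclass
class Zone:
    """One independently controllable segment of trading infrastructure."""
    name: str
    can_send: bool = True      # permitted to send orders to the market
    can_receive: bool = True   # permitted to receive market data


class ZonedKillSwitch:
    """Hypothetical sketch: shut down only the compromised zone,
    rather than cutting the whole firm off from the market."""

    def __init__(self, zones):
        self.zones = {z.name: z for z in zones}

    def isolate_orders(self, zone_name):
        # Halt outbound order flow from one zone, but keep inbound
        # market data flowing so the firm can diagnose and reconcile
        # the issue without a full shutdown.
        zone = self.zones[zone_name]
        zone.can_send = False
        zone.can_receive = True

    def active_senders(self):
        return [n for n, z in self.zones.items() if z.can_send]


switch = ZonedKillSwitch([Zone("equities"), Zone("fx"), Zone("rates")])
switch.isolate_orders("equities")   # only the compromised zone stops sending
print(switch.active_senders())      # the other zones keep trading
```

The key design point mirrored from the text is the asymmetry: `isolate_orders` blocks sending while deliberately leaving receiving enabled, which is what lets the affected participant keep seeing market data while it fixes its issue.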

A better cloud model: Cloud 2.0
The traditional multi-tenant cloud model has not been able to meet the latency demands of trading applications, marking a considerable barrier for cloud-based infrastructure. It also does not allow for data colocation: companies have to ship data to different data centers and pull it back over a virtual private network, which increases costs because of the shared storage and bandwidth. A single-tenant system allows for better performance and is more cost-effective.

The move to Cloud 2.0 will not only speed time-to-market, promote innovation and remove cost pressures associated with traditional infrastructure; it can also give companies the operational advantage they need to compete in today's complex financial markets. Firms that embrace Cloud 2.0 will be empowered to deploy new trading strategies and enter new markets with greater control, predictability and scalability around their costs. Disaster scenarios can be more easily contained by using a software-defined network and zoning to respond more intelligently to infrastructure failures that might traditionally cripple a company or impact the market. With latency no longer the most important differentiator for firms, the era of Cloud 2.0 will allow firms to meet complex infrastructure requirements in a high-performance, secure environment that can continuously evolve to solve the next big challenges in the market.
