Big Data seems to be synonymous with Hadoop, and I feel the hype around Hadoop has reached a new peak: all the hardware and solution providers are positioning themselves on Hadoop, be it through specific distributions (MapR offers a slimmed-down distribution) or through platform-specific optimisations (Intel has optimised Hadoop for Intel CPUs, DDN for HPC disks). Adoption of the technology seems to be quite significant outside of investment banking but unfortunately still limited in our industry – it was interesting to see the Hadoop panel of experts struggling to identify clear use cases for investment banking. It is still a technology looking for a use case (the only two ideas suggested so far are trade surveillance and model back-testing, with no evidence that Hadoop has actually been used for either in the industry), but I would put my money on technologists finding a way to leverage all this incredible R&D effort around Hadoop. Excelian has, for instance, implemented Amazon's Hadoop PaaS offering (Elastic MapReduce) for a client in order to build an open-source off-premise compute grid – not exactly a Big Data use of Hadoop, but a very interesting by-product.
More broadly, as I highlighted later in the day in my presentation on HPC trends, NoSQL solutions are slowly starting to be adopted in finance, and the pressure on costs will certainly accelerate this movement in the coming 12 months. On this topic, the MongoDB presentation was very informative and showed (again) how rich the NoSQL ecosystem is.
On the compute-intensive side of things, there were various conversations around the “right” hardware platform for performance – another typical CPU vs GPU vs FPGA vs Xeon Phi debate. The key question is who you want to be locked in with: a vendor API (CUDA, VHDL…), a generic API that has lost a little traction in the industry (OpenCL) or a third-party vendor (Xcelerit). An interesting presentation from Intel showed what has been known for a while in the GPU world: you really need a seriously parallel problem to get anything out of Xeon Phi. The advantage over Nvidia’s GPUs? All the code optimisations made to leverage Xeon Phi’s parallel processing capabilities (typically the use of #pragmas, specific memory allocation functions and optimised routines) will also increase the performance of the same code when run on traditional Xeon processors. As highlighted in Excelian’s HPC benchmark, adoption of GPUs has increased significantly over the last 12 months, and the adoption of specialist platforms is likely to continue now that Intel is present in this market as well. The question is: how quickly will Intel catch up with Nvidia on performance, tools etc., and how will the market react to this different value proposition?
The vendor-heavy panel on network monitoring highlighted today’s challenges around putting in place end-to-end business monitoring when approaching it from the lowest level (the network) – I would say the jury is still out on whether an agent-based approach scanning logs/APIs (with tools like Splunk or LogScape) is more efficient.
The last talk of the day took one step up to look at the impact of the EMIR / Dodd-Frank regulations on participants in the swaps market, with speakers from (among others) MarkitServ, iSwap and Citi. It is still very early days to understand what shape the market will take, but the introduction of multiple possible execution venues (SEFs) and multiple possible clearing venues (CCPs) creates challenges – for instance, around the banks’ ability to manage trading limits and margins at the book level – while potentially opening up new opportunities for algorithmic trading in the long run. Consolidation of the actors (be it on the banking side or on the SEF/CCP side) is also definitely on the horizon, since small shops will struggle to cope with all these changes in a cost-effective manner.
To summarise, once again a very interesting STAC conference that gave a good feel for the conversations currently happening in our industry, from new hardware platforms to the introduction of new middleware, the Big Data hype and, finally, changes in the way the business will run.