Interview with Jean-Yves Berthou, EDF

Jean-Yves Berthou is responsible for IT Programmes across EDF's Research and Development Division. EDF is one of Europe's largest energy companies. He talks to PlanetHPC about High Performance Computing at EDF.

In which application areas do you use HPC?

HPC numerical simulation is recognised at EDF as an indispensable tool. It has been used for a long time in such important operational matters as optimising day-to-day production, or choosing the safest and most effective configurations for nuclear refuelling. However, most of the advances towards higher levels of performance have been driven by the constant need to better explain the complex physical phenomena behind maintenance issues, to assess the impact of potential modifications or new vendor technology, and to anticipate changes in operating or regulatory conditions.

Most of EDF’s codes have been developed in-house, but a few commercial codes are also used.

Why do you use HPC (e.g. numerical simulation is cheaper or more feasible than physical experimentation)?

In many cases physical experiments and testing are not possible, for example in the simulation of fuel assemblies and crack propagation in nuclear reactors and in the optimisation of electricity production and trading. Even when experimentation is possible, numerical simulation can go beyond what is physically possible. Experimentation still remains an indispensable tool, complementary to simulation.

What are the cost benefits to your business of using HPC?

Simulation is very important across our business. It’s very hard to put a value on it, but it affects all our business areas. We see HPC as a technology which is essential to running our business.

Which HPC systems do you use in your business? What is their capital value? How scalable are your applications?

Mostly, EDF uses classical clusters from BULL, HP and IBM. The HP clusters have performances ranging from 5 to 25 TFLOPs and the 12-rack Blue Gene/L and Blue Gene/P systems have a performance of 130 TFLOPs. EDF has just announced the procurement of a 200 TFLOPs system for 2010, rising to 600 TFLOPs in 2011 and around 1 PFLOPs in 2012. The prices of these systems are not disclosed, but are the subject of competitive bids.

For your business does the Cloud offer a viable alternative to owning and managing your own systems?

EDF will not use third-party resources for its production requirements in the short term. It is collaborating with CEA (the French Commission for Atomic Energy) on distributed computing and, in particular, on the pooling of resources across an organisation. EDF plans to combine its own resources to create an in-house "Cloud" of virtual, pooled resources. It does not plan to use current Cloud offerings, such as those from Google and Amazon, because these are not yet mature enough for its requirements, which are determined by performance, portability and virtualisation considerations.

What are the challenges you see in the development of your HPC capability (e.g. scalability of applications, power consumption, cost of systems)?

The major challenges relate to portability of codes across different systems and scalability to 10,000 cores and beyond. The requirement is to port a complete simulation environment comprising coupled multiphysics codes to systems with large numbers of cores. Power consumption for such large systems is a major issue because it has a significant bearing on the total cost of ownership. This was a key reason behind EDF's choice of the Blue Gene/L and Blue Gene/P systems in the past.

Are new languages and programming paradigms needed, particularly as we move toward exascale systems?

This is an important issue. New languages and libraries are needed for large systems. EDF currently uses Fortran, C, C++, Python, MPI and OpenMP. It is not clear how usable these will be for systems with large numbers of cores. Whatever is used will need to address both heterogeneous cores and fault tolerance.
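
As an illustration of the current programming model mentioned above, the sketch below (an illustrative example, not an EDF code) combines MPI to distribute work across nodes with OpenMP to share it among the cores of each node; it sums the integers 0 to N-1 in parallel:

    /* Illustrative sketch of the hybrid MPI + OpenMP model cited above;
       not an EDF code. Each MPI rank sums its slice of the index space
       with OpenMP threads, then the partial sums are combined with MPI. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank owns a contiguous slice of the global index space. */
        long chunk = N / size;
        long begin = rank * chunk;
        long end   = (rank == size - 1) ? N : begin + chunk;

        double local_sum = 0.0;
        /* Threads within the node share the rank's slice. */
        #pragma omp parallel for reduction(+:local_sum)
        for (long i = begin; i < end; i++)
            local_sum += (double)i;

        double global_sum = 0.0;
        MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE,
                      MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum = %.0f\n", global_sum);

        MPI_Finalize();
        return 0;
    }

Whether this flat MPI-plus-threads model remains practical on machines with very large numbers of heterogeneous cores, and how it copes with faults, is precisely the open question raised above.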

As an example, EDF's structural mechanics codes represent an investment of over €100 million. This investment needs to be preserved as the codes are moved to new machines. This places constraints on development methodologies and new languages. The expertise of staff also represents a major investment, because it may take several years for a programmer to become proficient and productive in the various tools needed. Together these create inertia and constitute a real barrier to change. Over the last 20 years, EDF has increasingly been developing object-oriented codes. This approach cannot be abruptly changed.

What are your views on GPGPU computing?

GPGPU computing is clearly subject to the constraints discussed above. The need to change data structures to take advantage of such devices is an important consideration. EDF is looking closely at the feasibility of porting codes to GPGPUs, but will not use them for production codes within the next few years.
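
To make the data-structure point concrete, the sketch below (an illustrative example, not an EDF code) shows the classic rearrangement from an array of structures (AoS), common in CPU-oriented and object-oriented codes, to a structure of arrays (SoA), which lets GPU threads read each field contiguously:

    /* Illustrative sketch (not an EDF code) of the kind of data-layout
       change GPGPU porting often requires: an array of structures (AoS)
       is rearranged into a structure of arrays (SoA) so that neighbouring
       threads access neighbouring memory locations. */
    #include <stdlib.h>

    #define N 1000000

    /* CPU-friendly layout: one record per mesh node. */
    typedef struct {
        double x, y, z;        /* coordinates  */
        double temperature;    /* field value  */
    } NodeAoS;

    /* GPU-friendly layout: one contiguous array per field. */
    typedef struct {
        double *x, *y, *z;
        double *temperature;
    } NodesSoA;

    static void aos_to_soa(const NodeAoS *in, NodesSoA *out, size_t n) {
        for (size_t i = 0; i < n; i++) {
            out->x[i] = in[i].x;
            out->y[i] = in[i].y;
            out->z[i] = in[i].z;
            out->temperature[i] = in[i].temperature;
        }
    }

    int main(void) {
        NodeAoS *nodes = calloc(N, sizeof *nodes);
        NodesSoA soa = {
            malloc(N * sizeof(double)), malloc(N * sizeof(double)),
            malloc(N * sizeof(double)), malloc(N * sizeof(double))
        };
        aos_to_soa(nodes, &soa, N);

        free(nodes);
        free(soa.x); free(soa.y); free(soa.z); free(soa.temperature);
        return 0;
    }

Applying this kind of pervasive restructuring across large existing code bases is one reason GPGPU porting is a long-term exercise rather than a quick win.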

What are your views on reconfigurable (i.e. FPGA-based) computing particularly in light of developments at Convey?

Given the state of developments with GPGPUs and multicore devices, it seems that FPGAs have missed their window of opportunity because of difficulties in programming them.

Describe the HPC systems you would like to have available in 3, 5 and 10 years' time.

The market continues to demand an exponential increase in computing power. This will take us towards systems with performances of tens and hundreds of PFLOPs. These systems will need to run existing codes and be programmable using existing tools and languages. They will also need to have reasonable power consumption.

In which HPC research areas would EU-funded programmes benefit your business?

Considerable effort needs to be expended in HPC to address challenges in the usability of systems. The development of new use cases and application codes for Exaflop systems will create new business opportunities for HPC. Programming methods will need to be reviewed, and new pre- and post-processing methodologies and applications will need to be developed. European initiatives need to be part of a global effort from the HPC-user community. Hardware developments should be considered; notwithstanding this, software is the key issue to be tackled.

Our thanks to Mr Berthou for such an interesting insight into the computing needs of a large, multinational utility company.

