HPC a future cottage industry?

Discussion in 'Storage' started by Nomen Nescio, Aug 9, 2010.

  1. Nomen Nescio

    Nomen Nescio Guest

    I worked in the geophysical processing industry. Pretty much
    every shop now uses Linux clusters of 128-1000 nodes
    attached to a giant SAN. While a few algorithms need
    brute-force gigaflops, much of the work is stifled by the
    data shuffling.
    Now I can see a change coming. I know a guy who is going
    into business by himself. He has built (with little help)
    a box under his desk with four Magny-Cours processors,
    128 GB of RAM and 28 TB of RAID (plus a 256 GB SSD). That
    would be sufficient to run small projects. He claims it has
    faster throughput than the industry-standard
    "mainframe", as he snidely calls the clusters.
    Do you think this could catch on?
    Nomen Nescio, Aug 9, 2010
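A rough sketch of the throughput argument: streaming a full dataset once from local disks versus a shared SAN. The bandwidth figures below are illustrative assumptions (the thread gives no numbers); only the 28 TB capacity comes from the post.

```python
# Back-of-envelope: hours to stream a 28 TB dataset once.
# Bandwidth values are assumptions for illustration, not measurements.

TB = 1e12  # bytes

dataset_bytes = 28 * TB   # RAID capacity quoted in the post
local_raid_bw = 2e9       # assumed: ~2 GB/s aggregate from local RAID
san_share_bw = 0.2e9      # assumed: ~200 MB/s effective per job on a shared SAN

def stream_hours(nbytes, bw_bytes_per_s):
    """Hours to read nbytes once at the given sustained bandwidth."""
    return nbytes / bw_bytes_per_s / 3600.0

local_h = stream_hours(dataset_bytes, local_raid_bw)
san_h = stream_hours(dataset_bytes, san_share_bw)

print(f"local RAID: {local_h:.1f} h, shared SAN slice: {san_h:.1f} h")
```

Under these assumed numbers, the desk-side box wins any I/O-bound job that fits in its 28 TB, which is the "faster throughput than the mainframe" claim in a nutshell.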

  2. There are languages designed around the idea of partitioning,
    such that the programmer doesn't have to follow the details
    of the partitions. They tend to look a little different from
    the popular serial languages.
    That is always the problem.
    -- glen
    glen herrmannsfeldt, Aug 9, 2010
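The idea glen describes, where the programmer writes serial-looking code and the language or runtime decides how data is partitioned across workers, can be sketched in plain Python with a process pool. This is only an analogy for such languages (e.g. HPF, ZPL, Chapel), not an example of one; the `gain` kernel and the toy traces are made up for illustration.

```python
# Analogy for partition-hiding languages: the programmer writes an
# ordinary map over traces; how the work is split across processes
# is the runtime's decision, not the programmer's.
from multiprocessing import Pool

def gain(trace):
    """Toy per-trace kernel (stand-in for a real processing step)."""
    return [2 * sample for sample in trace]

if __name__ == "__main__":
    traces = [[i, i + 1, i + 2] for i in range(8)]  # toy seismic traces
    with Pool(4) as pool:
        out = pool.map(gain, traces)  # partitioning handled by the pool
    print(out[0])  # [0, 2, 4]
```

The catch glen points to is visible even here: the code "looks a little different" from its serial equivalent, and kernels must be independent per partition.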

  3. mac

    mac Guest

    > Now I can see a change coming. I know a guy, who is going

    Not as storage (I read this in comp.storage).

    Catch on how? Would he ship these boxes? I expect others already do.
    Rent out time like a mainframe? Likely to get mainframe-like throughput.
    mac, Aug 14, 2010
