The only cases where I personally pay attention and use small or large stripe sizes:

1. Large streaming media, for which Nimble (while I'm sure it's capable) is REALLY the wrong array to purchase. For this you're really looking at a different device (and generally one with a filer head designed to support these operations).

2. Tiny-block sustained random read (e-discovery), for which, again, I'm not sure I'd deploy a Nimble. Not saying it couldn't do it, just that I'm generally deploying something completely different for this workload.

The stripe size has to do with the internal structure of CASL; it has nothing to do with front-end host load. So it's actually AMAZING at small-block random I/O (read and/or write). As for streaming media, I think you'd be surprised at just how well CASL works in those environments and how ecstatic video editing/streaming customers have been with Nimble. While Nimble isn't purpose-built for video, that doesn't mean it doesn't handle it exceptionally well.

100% pure random read for e-discovery is going to get a lot of cache misses. As far as video, if it's uncompressed I can see you guys being handy, but really an LSI Engenio or other generic modular array is going to be a lot cheaper for streaming throughput workloads.

If CASL indeed is a log-structured FS, it just *HAS* to be very poor with non-cached reads.

As with my response to John: while the performance of non-cached reads is certainly not going to be at SSD latency, our LFS implementation does have some tweaks to reduce the impact compared to competitive systems. So while it's certainly higher latency than cached reads, I'd argue with the "very poor" rating, unless that implies anything that's not sub-millisecond response times.

FreeBSD running ZFS v28 on the same hardware. Intel IOMeter with 4 KB reads, 100% random, a 30+ minute run, 8 workers, and 64 I/Os in the queue. [A rough script approximating this profile appears after the thread.]

Are we talking about a ZFS box serving up block traffic to said host? 'Cause if so, I do know the answer, and it doesn't look pretty for your ZFS toy.

You know the answer already: you're losing 5-10x.

This is assuming, of course, that prior to the read requests we populated the system with data for a while to make sure it's on disk. Otherwise, we're both serving up non-existent cache data.

Btw, how about we run 100% random write workloads for a couple of hours/days/weeks and see how ZFS responds? Oh wait, we already do. It does the same thing as WAFL -> drops to its knees. You keep bashing Nimble and saying how much better ZFS would be, but customers have proven otherwise. We've gone up against all kinds of ZFS offshoots and have yet to see one that keeps up, let alone one with the scalability and feature set Nimble brings to the table. Seriously, KOOLER, let's stop the science projects and just deal with real storage systems.

Yes, when serving hot data from cache, all properly implemented systems provide similar performance, and I haven't seen anything yet to the contrary. Magic starts to happen with 100% random reads, where log-structured file systems deal with an extra level of fragmentation (multiple writes to the same file end up stored in different places on disk) and extra metadata access (unless the redirector table is stored in memory, which is a pain for multi-TB LUs). [A toy sketch of this redirector-table bookkeeping appears after the thread.] ZFS is a classic file system, with its log (the ZIL) being just an accelerator to handle writes better, so it does not carry any "log-only" processing overhead. I'm interested in how your engineers solved the "Holy Grail" of log-structured file systems: random reads.

Unfortunately, either they did not, or you just don't know. I did not say any of the things you claim I said. I've clearly asked for an Intel IOMeter report on the same hardware, and you've ended, as usual, with telling me how much your customers love what they got from you. That's not constructive conversation, to say the least :)

That, my friend, is called the "secret sauce". Did we solve the Holy Grail? Maybe not completely, but it sure is better than anything else that's been released thus far.
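Neither side shows CASL's or ZFS's actual internals above, so the following is only a minimal toy sketch of the generic log-structured bookkeeping KOOLER describes: every overwrite appends to the log tail, and a redirector (remap) table must be consulted on each read. All class and variable names are hypothetical; this is not CASL or ZFS code. (ZFS's ZIL, by contrast, never sits on the read path at all; it is only replayed after a crash, which is the "no log-only overhead" point above.)

```python
# Toy model of the log-structured bookkeeping described in the thread:
# every overwrite appends to the end of the log and updates a
# "redirector" (remap) table, so blocks that are logically adjacent
# from the host's point of view end up scattered across the log.
# All names are hypothetical; this is NOT CASL or ZFS code.

class ToyLogStore:
    def __init__(self):
        self.log = []     # append-only log of (lba, data) records
        self.remap = {}   # lba -> position of the newest copy in the log

    def write(self, lba, data):
        # Writes are cheap: a sequential append plus a table update.
        self.remap[lba] = len(self.log)
        self.log.append((lba, data))

    def read(self, lba):
        # Reads pay the extra hop: consult the remap table first,
        # then fetch from wherever the newest copy landed.
        pos = self.remap[lba]      # extra metadata access per read
        return self.log[pos][1]

store = ToyLogStore()
for i in range(4):
    store.write(lba=100 + i, data=f"v1-{i}")   # initially laid out contiguously
for i in range(4):
    store.write(lba=100 + i, data=f"v2-{i}")   # rewrites scatter to the log tail

# Logically adjacent blocks now live at log positions 4..7, not 0..3,
# and every read needs a remap lookup: the fragmentation plus metadata
# overhead the thread calls the LFS "Holy Grail" problem.
print([store.remap[100 + i] for i in range(4)])   # -> [4, 5, 6, 7]
```

The "pain for multi-TB LUs" is easy to size: at 4 KiB granularity a 10 TiB LU has about 2.7 billion logical blocks, so even at 8 bytes per remap entry the table alone needs roughly 20 GiB of RAM.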
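Since the thread's benchmark spec is concrete (4 KB transfers, 100% random, 100% read, 8 workers, 64 outstanding I/Os, 30+ minutes), here is a hedged Python stand-in for readers without IOMeter handy. The device path is hypothetical, plain threads top out near one outstanding I/O per worker (reproducing IOMeter's 64-deep queue needs an async engine such as fio), and, per the thread, the target must be preconditioned with writes first so reads really come from disk rather than non-existent cache data.

```python
# Rough, hedged stand-in for the IOMeter profile in the thread:
# 4 KiB transfers, 100% random, 100% read, 8 workers, 30+ minute run.
# TARGET is a hypothetical device path; a real test should hit the raw
# block device and precondition it with writes first.
import os
import random
import threading
import time

TARGET = "/dev/da0"        # hypothetical FreeBSD block device
BLOCK = 4096               # 4 KiB, as specified in the thread
WORKERS = 8                # 8 workers, as specified
RUNTIME = 30 * 60          # 30+ minutes, as specified

fd0 = os.open(TARGET, os.O_RDONLY)
size = os.lseek(fd0, 0, os.SEEK_END)   # device capacity in bytes
os.close(fd0)

completed = [0] * WORKERS

def worker(idx):
    fd = os.open(TARGET, os.O_RDONLY)
    deadline = time.monotonic() + RUNTIME
    nblocks = size // BLOCK
    try:
        while time.monotonic() < deadline:
            # os.pread releases the GIL, so the 8 workers really do
            # issue their reads concurrently against the device.
            off = random.randrange(nblocks) * BLOCK
            os.pread(fd, BLOCK, off)
            completed[idx] += 1
    finally:
        os.close(fd)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(WORKERS)]
start = time.monotonic()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start
print(f"{sum(completed)} reads in {elapsed:.0f}s -> ~{sum(completed)/elapsed:.0f} IOPS")
```

The interesting number for the thread's argument is not the IOPS figure itself but how it changes once the working set no longer fits in cache and every read turns into a remap lookup plus a disk seek.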