Hello all,
I recall buying a 1 GB drive back when that was pretty big, and I
ended up installing a benign virus of sorts (a pre-boot loader) that
translated disk addresses so the machine could access the thing. I do
not want to make that mistake again.
Consider a perfectly serviceable 1.8 GHz machine from just enough years
ago that a 100 GB drive was considered "huge". What are the options for
building a Linux system that can access 300-500 GB drives w/o going
haywire when it writes past some boundary one day? I have read claims
that Linux does not use the BIOS and is therefore immune to any such
problems, but I find that hard to believe. Somewhat more sensible was a
suggestion to disable the on-board controllers in favor of an add-on
controller that knows what to make of the drives. Would such a card
auto-detect the drive parameters, and/or include its own BIOS? Is there
any reason not to do something like that? My main concern is being able
to later move the drive to another machine w/o worrying about special
drivers.
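If it matters, here is how I have been checking what the kernel itself
reports for a disk, independent of whatever geometry the BIOS claims.
A minimal sketch in Python; /dev/hda is just a placeholder for wherever
the drive actually shows up, and it needs root to open the device:

    import os

    def kernel_disk_size(device="/dev/hda"):
        """Return the disk size in bytes as the running kernel sees it."""
        fd = os.open(device, os.O_RDONLY)
        try:
            # Seeking to the end of a block device yields its size.
            # This asks the kernel's driver directly, not the BIOS.
            return os.lseek(fd, 0, os.SEEK_END)
        finally:
            os.close(fd)

    if __name__ == "__main__":
        size = kernel_disk_size()
        print("kernel reports %.1f GB" % (size / 1e9))

My (possibly naive) understanding is that if this prints the full
capacity, the kernel's own driver is doing the addressing and the BIOS
only matters at boot time; if it prints something clipped, I am
presumably back to the overlay days.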
Yes, I could simply buy a machine (and might do so soon anyway), but I
hate to scrap perfectly good computers, especially given how well they
run Linux.
Bill
Wilhelm K. Schwab, Ph.D.
University of Florida
Department of Anesthesiology
PO Box 100254
Gainesville, FL 32610-0254
Email: [log in to unmask]
Tel: (352) 846-1285
FAX: (352) 392-7029