Ah, here comes the "monolithic vs modular" troll again...
Actually it does make sense to have a monolithic kernel tailored to your hardware. I am speaking about production machines. UNIX is supposed to be a stable OS that you hardly ever need to reboot, running on a machine dedicated to it. In that light, what are the advantages of modules (hardware drivers, crypto libraries, kernel-level binaries, whatever) over a monolithic kernel? I don't see many, since the modules are loaded when the machine boots and, ideally, are never unloaded (loading or unloading a kernel module is a critical operation for the system, and most production systems can't afford the luxury of a system failure).
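To illustrate the "load at boot, never unload" pattern: on a typical modern Linux box, the boot-time module list is just a static config file. A hypothetical sketch (the file name and module names below are illustrative; on systemd systems such files live under /etc/modules-load.d/, on older Debian-style systems it's /etc/modules):

```
# /etc/modules-load.d/server.conf  (hypothetical example)
# Modules loaded once at boot and then left alone for the machine's uptime.
e1000e    # NIC driver for this box's Intel Ethernet
raid1     # md RAID personality used by the root array
```

Which rather makes the point: if the list never changes between reboots, you could just as well have compiled those drivers in.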
Furthermore, loading, unloading, and otherwise handling modules has to be done by userland programs: executables on the hard disk, which are bound to user and group permissions and the filesystem's security policy like any other userland program. There is an inherent security weakness in this approach. If you haven't noticed yet, you soon will: a good number of Linux exploits involve kernel modules.
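The userland dependency is easy to see for yourself: the module tools are ordinary files on disk, with ordinary permission bits. A quick sketch (the tool names are the standard Linux ones; paths vary by distribution, and the tools may simply be absent in a container or on BSD):

```shell
#!/bin/sh
# Kernel module management is done by plain userland executables.
# Whoever can replace or subvert one of these binaries (or the PATH
# that finds them) can get code into the kernel.
for tool in insmod rmmod modprobe; do
    path=$(command -v "$tool" 2>/dev/null)
    if [ -n "$path" ]; then
        ls -l "$path"    # just a file, gated by regular permission bits
    else
        echo "$tool: not installed on this system"
    fi
done
```

Either way the output makes the point: the gateway into ring 0 is guarded by nothing stronger than the filesystem.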
Another reason why I tend to prefer monolithic kernels is that they typically take up less memory than their modular equivalents (once all the modules are loaded, I mean), and with this smaller memory footprint comes a (slightly) faster speed of execution. The BSD kernels are all monolithic. FreeBSD has a module facility, but it's nowhere near as widely used as on Linux, and many people (especially those who administer business machines) recommend avoiding it and sticking with a custom kernel perfectly tailored to your hardware (although the OpenBSD folks, with their well-known focus on security, recommend keeping the default monolithic kernel that ships with the installation).
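For the FreeBSD case, "tailoring" is literally one config file: you start from the stock GENERIC configuration and strip what your hardware doesn't have. A minimal sketch (the ident and the stripped devices are hypothetical; pick yours by reading your own dmesg):

```
# /usr/src/sys/amd64/conf/MYKERNEL  (hypothetical)
include GENERIC        # inherit the stock configuration
ident   MYKERNEL

nodevice wlan          # no wireless on this server
nodevice bluetooth     # no Bluetooth either
```

You then build and install it from /usr/src with `make buildkernel KERNCONF=MYKERNEL` followed by `make installkernel KERNCONF=MYKERNEL`.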