The Summer 2005 issue of Biomedical Computation Review lists the following as the top ten challenges of the next decade in computational biomedicine:
1. In silico screening of drug compounds
2. Predicting function from structure of complex molecules
3. Prediction of protein structure (from sequence)
4. Accurate, efficient and comprehensive dynamic models of the spread of infectious disease
5. Intelligent systems for mining biomedical literature
6. Complete annotation of the genomes of selected model organisms
7. Improved computerization of the health-care system
8. Making systems biology a reality
9. Tuning biomedical computing software to computer hardware
10. Promoting the use of computational biology tools in education
1, 2 and 3 may be more distant than the next decade, while 9 seems like more of a continuous process than a specific, finite challenge.
Fully resolving 5 requires genuinely solid natural language processing, but in the meantime, tools will keep appearing that substantially accelerate literature mining by humans. Effectively, "semiautomated" mining is already becoming a reality.
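To make the "semiautomated" idea concrete, here's a minimal sketch of the kind of helper I have in mind (not any particular published tool): it scans abstract text for co-occurrences of gene names and interaction verbs and surfaces the matching sentences for a human curator to review. The gene list and abstracts are made-up placeholders.

```python
import re

# Hypothetical inputs: in practice these would come from PubMed abstracts
# and a curated gene-name list; here they are placeholders for illustration.
GENES = {"BRCA1", "TP53", "MDM2"}
INTERACTION_VERBS = {"binds", "phosphorylates", "inhibits", "activates", "ubiquitinates"}

abstracts = [
    "MDM2 ubiquitinates TP53 and thereby inhibits its transcriptional activity.",
    "We describe a new protocol for tissue fixation.",
    "BRCA1 binds BARD1; the complex activates downstream repair pathways.",
]

def candidate_sentences(text):
    """Yield sentences mentioning at least one known gene and an interaction verb."""
    for sentence in re.split(r"(?<=[.;])\s+", text):
        words = set(re.findall(r"[A-Za-z0-9]+", sentence))
        genes_hit = GENES & words
        verbs_hit = INTERACTION_VERBS & {w.lower() for w in words}
        if genes_hit and verbs_hit:
            yield sentence, genes_hit, verbs_hit

for abstract in abstracts:
    for sentence, genes, verbs in candidate_sentences(abstract):
        # Flag for human review rather than asserting a relationship automatically.
        print(f"REVIEW: {sentence}\n  genes: {sorted(genes)}  verbs: {sorted(verbs)}")
```

Even something this crude narrows a thousand abstracts down to a short list a curator can actually read; the NLP-hard part is deciding whether a flagged sentence really asserts the relationship, which is why I'd call it semiautomated.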
8 has been more of a standards issue than anything else; as a consequence, there are ongoing efforts to get everyone using the same formats for their systems biology models and information.
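The best-known of those formats is SBML (the Systems Biology Markup Language). The sketch below is a deliberately stripped-down, hand-written fragment in that spirit, parsed with Python's standard XML library rather than a dedicated SBML toolkit, just to show what "everyone using the same format" buys you.

```python
import xml.etree.ElementTree as ET

# A hand-written, deliberately minimal model fragment in the spirit of SBML Level 2.
# This is an illustration, not a schema-valid SBML document.
MODEL_XML = """\
<sbml xmlns="http://www.sbml.org/sbml/level2" level="2" version="1">
  <model id="toy_model">
    <listOfSpecies>
      <species id="E" name="enzyme" compartment="cell"/>
      <species id="S" name="substrate" compartment="cell"/>
      <species id="P" name="product" compartment="cell"/>
    </listOfSpecies>
    <listOfReactions>
      <reaction id="conversion" reversible="false"/>
    </listOfReactions>
  </model>
</sbml>
"""

NS = {"sbml": "http://www.sbml.org/sbml/level2"}

root = ET.fromstring(MODEL_XML)
model = root.find("sbml:model", NS)

# Because the layout is standardized, any tool can pull out the same pieces
# without knowing anything about who wrote the model.
print("model:", model.get("id"))
for sp in model.findall(".//sbml:species", NS):
    print("species:", sp.get("id"), "-", sp.get("name"))
for rxn in model.findall(".//sbml:reaction", NS):
    print("reaction:", rxn.get("id"))
```

In practice you'd reach for libsbml rather than raw XML parsing, but the point of a shared format is exactly that even a generic parser can read everyone's models the same way.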
10 would be a good gain: I was barely exposed to the available computational biology tools as an undergrad, and didn't really touch on their use in any detail until grad school. Given how much of biology's data is openly available, it would help to orient students to the tools they can use to find the information they need.
2006-01-11 09:21 pm UTC
I think biologists will find 9 isn't actually a good goal, at least not for computation-heavy needs. Biology-specific compute engines would be faster than general-purpose hardware for the first generation, on par by the second, and by the third would be slower and less advanced. The only compute engine that has been made specifically for a certain task and not gone obsolete within three generations has really been the GPU (Graphics Processing Unit). All other offload compute engines designed to do a specific type of computation faster have been beaten by general-purpose CPUs.

The place where HW does make some sense isn't a performance push but a cost push. If you have a very constrained algorithm and very specific performance requirements (i.e., "as fast as possible" isn't a specific performance requirement!), making an ASIC to do the work is a reasonable idea. The iPod does it, as do most digital video solutions. These days they put the codec in HW because they can make a chip simpler than a processor that will do it just fast enough. I don't think biology is like that, with the possible exception of bio interfaces: I suspect HW that interacts with a biological process in real time would be fine running at the speed of the biological process and no faster. For any kind of analysis, though, general-purpose HW with a good programmer will give you a sustained multigenerational advantage.
2006-01-11 09:57 pm UTC
Yeah; the rest of that point reads as if the author doesn't understand the cost:benefit ratio of trying to tune software to hardware. Even the example he cites (a case where access to storage is a bottleneck that limits full use of processing power) suggests more of a hardware setup issue. Given the workload we already pile on our programmers to keep up with feature requests and expanding datasets, I have trouble imagining that it would be more cost-effective for most bioinformatics research to burn programmer time on this variety of optimization rather than just buying more computers.