High throughput screening (HTS) has long been an important first step in the search for new drug molecules. Here, the goal is to identify high-quality hits that can be easily optimized into lead compounds and ultimately, safe and effective medicines.
In recent years HTS has become something of a numbers game – after all, with a larger screening collection, you’ve got a greater chance of success, right? With infinite time and a limitless budget, perhaps so. But in reality, these resources are often in short supply, and screening needs to be as resource efficient as possible. Here, smaller, more manageable compound libraries that cover a chemically diverse area of pharmaceutically relevant chemical space can be more effective.
Is your collection pharmaceutically relevant?
Chemical space is pretty big – and that’s putting it mildly. Even if we restrict the search to small ‘drug-like’ organic molecules based on just a handful of different atoms (carbon, oxygen, nitrogen, hydrogen and sulfur), it’s thought that the number of potential molecules is on the order of 10^60. Restrict this further to the number of compounds that have actually been synthesized, and we reach a mere 90 million compounds, according to PubChem.
To whittle this down to a pharmaceutically relevant collection, it’s important to consider the qualities of a compound that make for a valuable lead.
Well-established guides such as Lipinski’s Rule of Five are still highly effective at focusing the choice of drug-like compounds to include in screening libraries – although it’s important to keep an open mind when considering non-oral routes of administration.
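A Rule of Five check is simple enough to sketch directly. The snippet below is a minimal illustration, assuming the four descriptors (molecular weight, logP, H-bond donor and acceptor counts) have already been computed upstream by a cheminformatics toolkit; the dict-based interface is illustrative rather than any specific library’s API.

```python
# Minimal Lipinski Rule of Five check on precomputed descriptors.
# Thresholds follow Lipinski's original cut-offs; the input format
# (a plain dict of descriptor values) is an assumption for this sketch.

def rule_of_five_violations(props):
    """Count Rule of Five violations for a descriptor dict with keys:
    'mw' (molecular weight, Da), 'logp' (octanol-water logP),
    'hbd' (H-bond donors), 'hba' (H-bond acceptors)."""
    violations = 0
    if props["mw"] > 500:
        violations += 1
    if props["logp"] > 5:
        violations += 1
    if props["hbd"] > 5:
        violations += 1
    if props["hba"] > 10:
        violations += 1
    return violations

def passes_rule_of_five(props, max_violations=1):
    # Lipinski's guideline tolerates a single violation.
    return rule_of_five_violations(props) <= max_violations
```

For example, a small molecule with a weight of 180 Da, logP of 1.3, one donor and four acceptors clears all four criteria – while, as noted above, compounds intended for non-oral routes may legitimately fall outside these bounds.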
However, even the most drug-like screening collections will contain a large proportion of molecular dead-ends. Compound libraries can be made more useful by removing structures with known reactive or toxic functional groups or those that may interfere with biological assays – commonly known as ‘frequent hitters’.
In recent years the increased use of computational filters has made this process much easier. For example, ‘Rapid Elimination of Swill’ (REOS) filters can be effective at identifying substructures associated with undesirable absorption, distribution, metabolism, elimination and toxicity problems.
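In outline, a REOS-style triage combines hard property windows with a blacklist of problematic substructures. The sketch below is illustrative only: real REOS rules rely on SMARTS pattern matching in a cheminformatics toolkit, whereas here the substructure flags, group names and property bounds are placeholder assumptions standing in for that upstream step.

```python
# Illustrative REOS-style triage: property windows plus a blacklist of
# flagged substructures. The bounds and group names below are
# placeholders, not the published REOS rule set.

REOS_BOUNDS = {
    "mw": (200, 500),         # molecular weight window (Da)
    "logp": (-5.0, 5.0),      # lipophilicity window
    "heavy_atoms": (15, 50),  # non-hydrogen atom count
}

# Hypothetical labels for reactive/toxic groups, assumed to have been
# assigned by an upstream substructure search.
FLAGGED_GROUPS = {"acyl_halide", "michael_acceptor", "nitro_aromatic"}

def reos_pass(props, substructure_flags):
    """Return True if a compound sits inside every property window and
    carries none of the flagged reactive or toxic substructures."""
    for key, (lo, hi) in REOS_BOUNDS.items():
        if not (lo <= props[key] <= hi):
            return False
    return not (set(substructure_flags) & FLAGGED_GROUPS)
```

The design point is that the two checks are cheap and independent, so a library can be triaged in a single pass before any medicinal chemist review.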
However, computational filtering is most effective when supported by the knowledge and experience of seasoned medicinal chemists. While filtering can reduce the risk of failure, it will also remove viable compounds. Take the recent debate over the value of computational PAINS filters, for example.
Are you screening a structurally diverse collection?
Some of the largest screening libraries in Big Pharma are on the order of millions of molecules. But size isn’t everything. Screening large numbers of structurally similar compounds is inefficient and can waste valuable time and resources.
A more efficient approach could be through the use of smaller, yet smarter libraries, containing a structurally and chemically diverse set of compounds. Here, it can be useful to think of a collection in terms of its pharmacophoric diversity – the ways in which a molecule can interact with a target and where these binding features are located on the molecule’s surface. Libraries that include a wide variety of binding features such as hydrogen bond donor and acceptor sites, pi-interactions and electrostatic interactions, as well as a large degree of variation in terms of framework and stereochemistry, can significantly improve the chances of discovering effective hits.
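One common way to build such a diverse subset is MaxMin picking: greedily select each new compound to be maximally dissimilar from everything already chosen. The sketch below assumes molecular fingerprints have already been generated upstream and are represented simply as Python sets of “on” bit indices, with dissimilarity measured as one minus the Tanimoto coefficient.

```python
# MaxMin diversity picking over binary fingerprints, represented here
# as sets of "on" bit indices. Fingerprint generation is assumed to
# happen upstream; this sketch shows only the selection step.

def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two fingerprint bit sets."""
    if not fp_a and not fp_b:
        return 1.0
    shared = len(fp_a & fp_b)
    return shared / (len(fp_a) + len(fp_b) - shared)

def maxmin_pick(fingerprints, n_picks, seed_index=0):
    """Greedily pick n_picks indices, each chosen to maximise its
    distance (1 - Tanimoto) to the nearest already-picked compound."""
    picked = [seed_index]
    while len(picked) < n_picks:
        best_idx, best_dist = None, -1.0
        for i, fp in enumerate(fingerprints):
            if i in picked:
                continue
            # Distance to the closest compound already in the subset.
            d = min(1.0 - tanimoto(fp, fingerprints[j]) for j in picked)
            if d > best_dist:
                best_idx, best_dist = i, d
        picked.append(best_idx)
    return picked
```

This naive loop is quadratic in library size; production tools use optimised variants of the same idea, but the principle – maximise the minimum pairwise distance – is identical.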
When it comes to implementing an efficient screening campaign, ask yourself whether your compound library is as structurally diverse and pharmaceutically relevant as it can be.
To learn more about effective screening library design and the dangers of an over-reliance on computational filters, read our article in full.