Thompson Sampling#

The Thompson Sampling module provides a unified, flexible framework for exploring chemical libraries with Thompson Sampling. It offers pluggable selection strategies, warmup approaches, and evaluators for efficiently screening ultra-large combinatorial libraries.

The module follows a composition-based architecture where the core ThompsonSampler class accepts pluggable components:

  • Selection Strategies - How to choose reagents during search

  • Warmup Strategies - How to initialize priors before search

  • Evaluators - How to score generated compounds

Module Architecture#

[Diagram: module architecture. The core ThompsonSampler (with Reagent and ParallelEvaluator helpers) composes three pluggable component families: SelectionStrategy (GreedySelection, RouletteWheelSelection, UCBSelection, EpsilonGreedySelection, BayesUCBSelection), WarmupStrategy (BalancedWarmup, StandardWarmup, EnhancedWarmup), and Evaluator (LookupEvaluator, DBEvaluator, FPEvaluator, MWEvaluator, ROCSEvaluator, FredEvaluator, MLClassifierEvaluator).]

Quick Start#

Using presets (recommended):

Simplest usage with presets#
from TACTICS.library_enumeration import SynthesisPipeline
from TACTICS.library_enumeration.smarts_toolkit import ReactionConfig, ReactionDef
from TACTICS.thompson_sampling import ThompsonSampler, get_preset
from TACTICS.thompson_sampling.core.evaluator_config import LookupEvaluatorConfig

# 1. Create synthesis pipeline (single source of truth)
rxn_config = ReactionConfig(
    reactions=[ReactionDef(
        reaction_smarts="[C:1](=O)[OH].[NH2:2]>>[C:1](=O)[NH:2]",
        step_index=0
    )],
    reagent_file_list=["acids.smi", "amines.smi"]
)
pipeline = SynthesisPipeline(rxn_config)

# 2. Get preset configuration
config = get_preset(
    "fast_exploration",
    synthesis_pipeline=pipeline,
    evaluator_config=LookupEvaluatorConfig(ref_filename="scores.csv"),
    mode="minimize",
    num_iterations=1000
)

# 3. Create sampler and run optimization
sampler = ThompsonSampler.from_config(config)
warmup_df = sampler.warm_up(num_warmup_trials=config.num_warmup_trials)
results_df = sampler.search(num_cycles=config.num_ts_iterations)
sampler.close()

print(results_df.sort("score").head(10))

Direct sampler control:

Manual sampler setup#
from TACTICS.library_enumeration import SynthesisPipeline
from TACTICS.library_enumeration.smarts_toolkit import ReactionConfig, ReactionDef
from TACTICS.thompson_sampling.core.sampler import ThompsonSampler
from TACTICS.thompson_sampling.strategies import RouletteWheelSelection
from TACTICS.thompson_sampling.warmup import BalancedWarmup
from TACTICS.thompson_sampling.factories import create_evaluator
from TACTICS.thompson_sampling.core.evaluator_config import LookupEvaluatorConfig

# 1. Create synthesis pipeline
rxn_config = ReactionConfig(
    reactions=[ReactionDef(
        reaction_smarts="[C:1](=O)[OH].[NH2:2]>>[C:1](=O)[NH:2]",
        step_index=0
    )],
    reagent_file_list=["acids.smi", "amines.smi"]
)
pipeline = SynthesisPipeline(rxn_config)

# 2. Create components
strategy = RouletteWheelSelection(mode="maximize", alpha=0.1, beta=0.05)
warmup = BalancedWarmup(observations_per_reagent=3)
evaluator = create_evaluator(LookupEvaluatorConfig(ref_filename="scores.csv"))

# 3. Create sampler with pipeline
sampler = ThompsonSampler(
    synthesis_pipeline=pipeline,
    selection_strategy=strategy,
    warmup_strategy=warmup,
    batch_size=10
)

# 4. Set evaluator and run
sampler.set_evaluator(evaluator)
warmup_df = sampler.warm_up(num_warmup_trials=3)
results_df = sampler.search(num_cycles=1000)
sampler.close()

ThompsonSampler#

The main class for Thompson Sampling optimization.

The ThompsonSampler is the central orchestrator that coordinates selection strategies, warmup strategies, and evaluators to efficiently explore combinatorial chemical libraries.

Dependencies

Depends on: SynthesisPipeline, SelectionStrategy, WarmupStrategy, Evaluator

Constructor#

Parameters#

Parameter | Type | Required | Description
--- | --- | --- | ---
synthesis_pipeline | SynthesisPipeline | Yes | Pipeline containing reaction config and reagent files (single source of truth).
selection_strategy | SelectionStrategy | Yes | Selection strategy instance (Greedy, RouletteWheel, UCB, etc.).
warmup_strategy | WarmupStrategy | No | Warmup strategy. Default: StandardWarmup().
batch_size | int | No | Compounds to sample per cycle. Default: 1.
processes | int | No | CPU cores for parallel evaluation. Default: 1 (sequential).
min_cpds_per_core | int | No | Min compounds per core before batch evaluation. Default: 10.
max_resamples | int | No | Stop after this many consecutive duplicates. Default: None.
log_filename | str | No | Path for log file output.
product_library_file | str | No | Pre-enumerated product CSV for testing mode.
use_boltzmann_weighting | bool | No | Use Boltzmann-weighted updates (legacy RWS). Default: False.
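
For example, a sampler paired with a slow evaluator might enable parallel evaluation and a duplicate-based stopping rule. This is a minimal sketch that reuses the pipeline and strategy objects from the Quick Start above; the specific values are illustrative.

sampler = ThompsonSampler(
    synthesis_pipeline=pipeline,        # from the Quick Start example
    selection_strategy=strategy,        # e.g. RouletteWheelSelection(...)
    batch_size=10,                      # compounds sampled per cycle
    processes=8,                        # parallel evaluation across 8 cores
    min_cpds_per_core=10,               # batch threshold before parallel dispatch
    max_resamples=1000,                 # stop after 1000 consecutive duplicates
    log_filename="ts_run.log",
)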

Factory Method: from_config#

Create a sampler from a Pydantic configuration.

Parameters#

Parameter | Type | Required | Description
--- | --- | --- | ---
config | ThompsonSamplingConfig | Yes | Configuration with strategy, warmup, and evaluator settings.

Returns

Type | Description
--- | ---
ThompsonSampler | Configured sampler ready for warmup and search.

Example

from TACTICS.library_enumeration import SynthesisPipeline
from TACTICS.library_enumeration.smarts_toolkit import ReactionConfig, ReactionDef
from TACTICS.thompson_sampling.core.sampler import ThompsonSampler
from TACTICS.thompson_sampling.config import ThompsonSamplingConfig
from TACTICS.thompson_sampling.strategies.config import RouletteWheelConfig
from TACTICS.thompson_sampling.core.evaluator_config import LookupEvaluatorConfig

# Create synthesis pipeline
rxn_config = ReactionConfig(
    reactions=[ReactionDef(reaction_smarts="[C:1](=O)[OH].[NH2:2]>>[C:1](=O)[NH:2]", step_index=0)],
    reagent_file_list=["acids.smi", "amines.smi"]
)
pipeline = SynthesisPipeline(rxn_config)

# Create Thompson Sampling config
config = ThompsonSamplingConfig(
    synthesis_pipeline=pipeline,
    num_ts_iterations=1000,
    strategy_config=RouletteWheelConfig(mode="maximize"),
    evaluator_config=LookupEvaluatorConfig(ref_filename="scores.csv")
)

sampler = ThompsonSampler.from_config(config)

Core Methods#

warm_up#

Initialize reagent posteriors with warmup evaluations.

Parameters#

Parameter | Type | Required | Description
--- | --- | --- | ---
num_warmup_trials | int | No | Trials per reagent. Default: 3.

Returns

Type | Description
--- | ---
polars.DataFrame | Warmup results with columns: score, SMILES, Name.
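
A short usage sketch, continuing from a configured sampler as in the Quick Start; inspecting the returned Polars DataFrame is a convenient sanity check on score ranges before starting the search.

warmup_df = sampler.warm_up(num_warmup_trials=5)
print(warmup_df.head())                                    # columns: score, SMILES, Name
print(warmup_df["score"].min(), warmup_df["score"].max())  # rough score range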

evaluate#

Evaluate a single reagent combination.

Parameters#

Parameter | Type | Required | Description
--- | --- | --- | ---
choice_list | list[int] | Yes | Reagent indices for each component.

Returns

Type | Description
--- | ---
tuple[str, str, float] | (product_smiles, product_name, score).
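
For example, with the two-component amide library from the Quick Start, the indices refer to positions in the acid and amine reagent lists respectively (a sketch; the indices are illustrative):

smiles, name, score = sampler.evaluate([0, 2])   # first acid, third amine
print(smiles, name, score)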

Setup Methods#

Method | Description
--- | ---
set_evaluator(evaluator) | Set the scoring evaluator.
load_product_library(library_file) | Load pre-enumerated products for testing.
close() | Clean up multiprocessing resources.

Note

The synthesis_pipeline is now passed to the constructor and is the single source of truth for reactions and reagents. The old read_reagents() and set_reaction() methods have been removed.

Selection Strategies#

Selection strategies determine how reagents are chosen during the search phase. All strategies implement the SelectionStrategy abstract base class.

[Diagram: during reagent selection, the sampler dispatches to one of GreedySelection (argmax/argmin), RouletteWheelSelection (thermal cycling), UCBSelection (confidence bounds), BayesUCBSelection (Student-t + CATS), or EpsilonGreedySelection (random/greedy).]

SelectionStrategy (Base Class)#

Abstract base class for all selection strategies. Extend this to create custom strategies.

Required Methods#

Method | Description
--- | ---
select_reagent(reagent_list, disallow_mask, rng, ...) | Select one reagent from the list.
select_batch(reagent_list, batch_size, ...) | Select multiple reagents (optional override).
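
To illustrate the extension point, the sketch below implements a toy uniform-random strategy. It assumes select_reagent receives the reagent list, a boolean disallow mask, and a NumPy random generator, and returns the index of the chosen reagent; check the SelectionStrategy source for the exact signature and the package export path.

from TACTICS.thompson_sampling.strategies import SelectionStrategy  # assumed export path

class UniformRandomSelection(SelectionStrategy):
    """Toy strategy: pick an allowed reagent uniformly at random."""

    def select_reagent(self, reagent_list, disallow_mask, rng, **kwargs):
        # Assumes disallow_mask marks blocked reagents with True.
        allowed = [i for i in range(len(reagent_list)) if not disallow_mask[i]]
        return int(rng.choice(allowed))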

GreedySelection#

Simple greedy selection using argmax/argmin of sampled scores.

Extends: SelectionStrategy

  • Fast convergence but may get stuck in local optima

  • Best for: Simple optimization landscapes, limited budgets

Parameters#

Parameter | Type | Required | Description
--- | --- | --- | ---
mode | str | No | "maximize" or "minimize". Default: "maximize".

Example

from TACTICS.thompson_sampling.strategies import GreedySelection

strategy = GreedySelection(mode="maximize")
# For docking scores (lower is better)
strategy = GreedySelection(mode="minimize")

RouletteWheelSelection#

Roulette wheel selection with thermal cycling and Component-Aware Thompson Sampling (CATS).

Extends: SelectionStrategy

  • Boltzmann-weighted selection with adaptive temperature control

  • Component rotation for systematic exploration

  • CATS: Shannon entropy-based criticality analysis

  • Best for: Complex multi-modal landscapes, large libraries

Parameters#

Parameter | Type | Required | Description
--- | --- | --- | ---
mode | str | No | "maximize", "minimize", "maximize_boltzmann", or "minimize_boltzmann".
alpha | float | No | Base temperature for the heated component. Default: 0.1.
beta | float | No | Base temperature for cooled components. Default: 0.05.
exploration_phase_end | float | No | Fraction of the run completed before CATS starts. Default: 0.20.
transition_phase_end | float | No | Fraction of the run at which CATS is fully applied. Default: 0.60.
min_observations | int | No | Min observations before trusting criticality. Default: 5.

Example

from TACTICS.thompson_sampling.strategies import RouletteWheelSelection

# Standard thermal cycling
strategy = RouletteWheelSelection(
    mode="maximize",
    alpha=0.1,
    beta=0.05
)

# Higher exploration
strategy = RouletteWheelSelection(
    mode="maximize",
    alpha=0.2,
    beta=0.1
)

UCBSelection#

Upper Confidence Bound selection with deterministic behavior.

Extends: SelectionStrategy

  • Balances exploitation and exploration via confidence bounds

  • Best for: Situations requiring deterministic, reproducible behavior

Parameters#

Parameter | Type | Required | Description
--- | --- | --- | ---
mode | str | No | "maximize" or "minimize". Default: "maximize".
c | float | No | Exploration parameter. Higher = more exploration. Default: 2.0.

Example

from TACTICS.thompson_sampling.strategies import UCBSelection

strategy = UCBSelection(mode="maximize", c=2.0)
# Higher exploration
strategy = UCBSelection(mode="maximize", c=4.0)

EpsilonGreedySelection#

Simple exploration strategy with decaying epsilon.

Extends: SelectionStrategy

  • Random selection with probability epsilon, greedy otherwise

  • Best for: Baseline comparisons, simple exploration needs

Parameters#

Parameter | Type | Required | Description
--- | --- | --- | ---
mode | str | No | "maximize" or "minimize". Default: "maximize".
epsilon | float | No | Initial exploration probability [0, 1]. Default: 0.1.
decay | float | No | Decay rate per iteration. Default: 0.995.

Example

from TACTICS.thompson_sampling.strategies import EpsilonGreedySelection

# 20% exploration with decay
strategy = EpsilonGreedySelection(
    mode="maximize",
    epsilon=0.2,
    decay=0.995
)

BayesUCBSelection#

Bayesian UCB with Student-t quantiles and CATS integration.

Extends: SelectionStrategy

  • Theoretically grounded Bayesian confidence bounds

  • Percentile-based thermal cycling (analogous to temperature control)

  • Component-aware exploration based on Shannon entropy

  • Best for: Complex landscapes, escaping local optima

  • Requires: scipy

Parameters#

Parameter

Type

Required

Description

mode

str

No

"maximize" or "minimize". Default: "maximize".

initial_p_high

float

No

Base percentile for heated component [0.5, 0.999]. Default: 0.90.

initial_p_low

float

No

Base percentile for cooled components [0.5, 0.999]. Default: 0.60.

exploration_phase_end

float

No

Fraction before CATS starts. Default: 0.20.

transition_phase_end

float

No

Fraction when CATS fully applied. Default: 0.60.

min_observations

int

No

Min observations before trusting criticality. Default: 5.

Example

from TACTICS.thompson_sampling.strategies import BayesUCBSelection

strategy = BayesUCBSelection(mode="maximize")

# More aggressive exploration
strategy = BayesUCBSelection(
    mode="maximize",
    initial_p_high=0.95,
    initial_p_low=0.70,
    exploration_phase_end=0.25
)

Warmup Strategies#

Warmup strategies determine how reagent combinations are sampled to initialize posteriors before the main search begins.

[Diagram: priors are initialized by one of BalancedWarmup (recommended; K observations per reagent), StandardWarmup (random partners), or EnhancedWarmup (parallel pairing, legacy RWS).]

WarmupStrategy (Base Class)#

Abstract base class for warmup strategies.

Required Methods#

Method | Description
--- | ---
generate_warmup_combinations(reagent_lists, num_trials, disallow_tracker) | Generate list of combinations to evaluate.
get_expected_evaluations(reagent_lists, num_trials) | Estimate number of evaluations.
get_name() | Return strategy name.

StandardWarmup#

Standard warmup testing each reagent with random partners.

Extends: WarmupStrategy

  • Simple and straightforward

  • Ensures all reagents evaluated

  • Expected evaluations: sum(reagent_counts) * num_trials

Parameters#

Parameter | Type | Required | Description
--- | --- | --- | ---
seed | int | No | Random seed for reproducibility.
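
Usage is a single constructor call, and the expected cost follows the formula above. A minimal sketch (assuming StandardWarmup is exported from the same warmup package as BalancedWarmup; the reagent counts are illustrative):

from TACTICS.thompson_sampling.warmup import StandardWarmup

warmup = StandardWarmup(seed=42)   # seed only affects the random partner choices
# Expected warmup evaluations: sum(reagent_counts) * num_trials
# e.g. (100 acids + 200 amines) * 3 trials = 900 evaluations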

EnhancedWarmup (Legacy)#

Stochastic parallel pairing with shuffling from the original RWS algorithm.

Extends: WarmupStrategy

  • Parallel pairing of reagents across components

  • Required for replicating legacy RWS results

  • Best for: legacy_rws_maximize and legacy_rws_minimize presets

Parameters#

Parameter | Type | Required | Description
--- | --- | --- | ---
seed | int | No | Random seed for reproducibility.
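
A usage sketch (assuming EnhancedWarmup is exported from the same warmup package as BalancedWarmup); it is typically paired with the legacy RWS presets or with use_boltzmann_weighting=True:

from TACTICS.thompson_sampling.warmup import EnhancedWarmup

warmup = EnhancedWarmup(seed=42)
sampler = ThompsonSampler(
    synthesis_pipeline=pipeline,       # from the Quick Start
    selection_strategy=strategy,
    warmup_strategy=warmup,
    use_boltzmann_weighting=True,      # legacy RWS-style updates
)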

Evaluators#

Evaluators score compounds based on various criteria. Choose based on your data source and computational requirements.

[Diagram: evaluators grouped by cost. Fast: LookupEvaluator (CSV), DBEvaluator (SQLite). Computational: FPEvaluator (fingerprints), MWEvaluator (molecular weight). Slow (use processes>1): ROCSEvaluator (3D shape), FredEvaluator (docking), MLClassifierEvaluator.]

Evaluator (Base Class)#

Abstract base class for all evaluators.

Required Methods#

Method | Description
--- | ---
evaluate(input) | Score a compound (accepts Mol or product_name depending on evaluator).
counter (property) | Number of evaluations performed.
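
As an illustration of the interface, the sketch below scores compounds by calculated logP with RDKit. The base-class import path and the exact abstract interface are assumptions inferred from the method table above; check the package source for the real signature.

from rdkit.Chem import Descriptors
from TACTICS.thompson_sampling.evaluators import Evaluator  # assumed import path

class LogPEvaluator(Evaluator):
    """Toy evaluator: score = calculated logP of the product molecule."""

    def __init__(self):
        self._count = 0

    def evaluate(self, mol):
        # Assumes the sampler passes an RDKit Mol for this evaluator type.
        self._count += 1
        return Descriptors.MolLogP(mol)

    @property
    def counter(self):
        return self._count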

LookupEvaluator#

Fast evaluator that looks up pre-computed scores from a CSV file.

Extends: Evaluator

  • Use for: Pre-computed scores, benchmarking

  • Recommendation: Use processes=1 (parallel overhead exceeds lookup time)

Config Parameters (LookupEvaluatorConfig)#

Parameter | Type | Required | Description
--- | --- | --- | ---
ref_filename | str | Yes | Path to CSV file with scores.
score_col | str | No | Column name for scores. Default: "Scores".
compound_col | str | No | Column name for compound IDs. Default: "Product_Code".

Example

from TACTICS.thompson_sampling.core.evaluator_config import LookupEvaluatorConfig
from TACTICS.thompson_sampling.factories import create_evaluator

config = LookupEvaluatorConfig(
    ref_filename="scores.csv",
    score_col="binding_affinity"
)
evaluator = create_evaluator(config)

DBEvaluator#

Fast evaluator using SQLite database for large datasets.

Extends: Evaluator

  • Use for: Large pre-computed datasets (millions of compounds)

  • Recommendation: Use processes=1

Config Parameters (DBEvaluatorConfig)#

Parameter | Type | Required | Description
--- | --- | --- | ---
db_filename | str | Yes | Path to SQLite database.
db_prefix | str | No | Key prefix for lookups. Default: "".
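
A usage sketch, assuming DBEvaluatorConfig is exported from the same evaluator_config module as LookupEvaluatorConfig:

from TACTICS.thompson_sampling.core.evaluator_config import DBEvaluatorConfig
from TACTICS.thompson_sampling.factories import create_evaluator

config = DBEvaluatorConfig(db_filename="scores.db", db_prefix="LIB1_")
evaluator = create_evaluator(config)
sampler.set_evaluator(evaluator)   # sampler constructed as in the Quick Start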

FPEvaluator#

Evaluator using Morgan fingerprint Tanimoto similarity.

Extends: Evaluator

  • Use for: Similarity-based virtual screening

  • Returns: Tanimoto similarity [0, 1]

Config Parameters (FPEvaluatorConfig)#

Parameter | Type | Required | Description
--- | --- | --- | ---
query_smiles | str | Yes | Reference molecule SMILES.
radius | int | No | Morgan fingerprint radius. Default: 2.
n_bits | int | No | Fingerprint bit length. Default: 2048.
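
A usage sketch, assuming FPEvaluatorConfig lives in the same evaluator_config module as LookupEvaluatorConfig; the query SMILES (aspirin) is only a placeholder reference structure:

from TACTICS.thompson_sampling.core.evaluator_config import FPEvaluatorConfig
from TACTICS.thompson_sampling.factories import create_evaluator

config = FPEvaluatorConfig(
    query_smiles="CC(=O)Oc1ccccc1C(=O)O",   # placeholder reference (aspirin)
    radius=2,
    n_bits=2048,
)
evaluator = create_evaluator(config)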

MWEvaluator#

Simple evaluator returning molecular weight. Primarily for testing.

Extends: Evaluator

ROCSEvaluator#

3D shape-based evaluator using OpenEye ROCS.

Extends: Evaluator

  • Use for: Shape-based virtual screening

  • Requires: OpenEye Toolkit license

  • Recommendation: Use processes>1 for parallel evaluation

Config Parameters (ROCSEvaluatorConfig)#

Parameter | Type | Required | Description
--- | --- | --- | ---
query_molfile | str | Yes | Path to reference structure (.sdf).
max_confs | int | No | Max conformers to generate. Default: 50.
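
A usage sketch (ROCSEvaluatorConfig assumed to be in the same evaluator_config module); because ROCS scoring is slow, pair it with a sampler running multiple processes:

from TACTICS.thompson_sampling.core.evaluator_config import ROCSEvaluatorConfig
from TACTICS.thompson_sampling.factories import create_evaluator

config = ROCSEvaluatorConfig(query_molfile="reference_ligand.sdf", max_confs=50)
evaluator = create_evaluator(config)
# Pair with a parallel sampler, e.g. ThompsonSampler(..., processes=8)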

FredEvaluator#

Molecular docking evaluator using OpenEye FRED.

Extends: Evaluator

  • Use for: Structure-based virtual screening

  • Requires: OpenEye Toolkit license

  • Recommendation: Use processes>1 for parallel evaluation

  • Mode: minimize (lower docking scores = better)

Config Parameters (FredEvaluatorConfig)#

Parameter | Type | Required | Description
--- | --- | --- | ---
design_unit_file | str | Yes | Path to receptor file (.oedu).
max_confs | int | No | Max conformers to generate. Default: 100.
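
A usage sketch (FredEvaluatorConfig assumed to be in the same evaluator_config module); docking scores are lower-is-better, so combine it with a minimizing strategy or preset:

from TACTICS.thompson_sampling.core.evaluator_config import FredEvaluatorConfig
from TACTICS.thompson_sampling.factories import create_evaluator

config = FredEvaluatorConfig(design_unit_file="receptor.oedu", max_confs=100)
evaluator = create_evaluator(config)
# Combine with, e.g., GreedySelection(mode="minimize") or a "minimize" preset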

MLClassifierEvaluator#

Evaluator using a trained scikit-learn classifier.

Extends: Evaluator

  • Use for: ML-based scoring with trained models

  • Requires: scikit-learn, trained model pickle file

Config Parameters (MLClassifierEvaluatorConfig)#

Parameter | Type | Required | Description
--- | --- | --- | ---
model_filename | str | Yes | Path to pickled sklearn model.
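
A usage sketch (MLClassifierEvaluatorConfig assumed to be in the same evaluator_config module); the model file is an illustrative placeholder:

from TACTICS.thompson_sampling.core.evaluator_config import MLClassifierEvaluatorConfig
from TACTICS.thompson_sampling.factories import create_evaluator

config = MLClassifierEvaluatorConfig(model_filename="activity_model.pkl")
evaluator = create_evaluator(config)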

Strategy Selection Guide#

Choose the right strategy based on your use case:

Strategy | Best For | Pros | Cons
--- | --- | --- | ---
Greedy | Simple landscapes, limited budgets | Fast convergence | Can get stuck in local optima
RouletteWheel | Complex multi-modal landscapes | Thermal cycling, CATS, adaptive | More parameters to tune
UCB | Deterministic optimization needs | Theoretically grounded | Less stochastic
BayesUCB | Complex landscapes, escaping optima | Bayesian bounds, CATS | Requires scipy
EpsilonGreedy | Baseline comparisons | Very simple | Less sophisticated

Evaluator Selection Guide#

Choose based on your data source and computational requirements:

Fast Evaluators (use processes=1):

  • LookupEvaluator: Pre-computed scores in CSV

  • DBEvaluator: Pre-computed scores in SQLite

Computational Evaluators:

  • FPEvaluator: Fingerprint similarity (fast)

  • MWEvaluator: Molecular weight (testing only)

Slow Evaluators (use processes>1):

  • ROCSEvaluator: 3D shape similarity (requires OpenEye)

  • FredEvaluator: Molecular docking (requires OpenEye)

  • MLClassifierEvaluator: ML model predictions

See the Configuration System page for preset configurations and detailed examples.