SBOAtools

SBOAtools is an R package for the Secretary Bird Optimization Algorithm (SBOA). The package supports both general-purpose continuous optimization and single-hidden-layer multilayer perceptron (MLP) training.

It is designed for researchers working in metaheuristic optimization, computational intelligence, and neural network training. The package allows users to apply SBOA either as a standalone optimizer or as a training algorithm for feed-forward neural networks.


Features

  • General-purpose continuous optimization with sboa()
  • Single-hidden-layer MLP training with sboa_mlp()
  • Built-in benchmark functions (F1-F23)
  • Benchmark inspection with list_benchmarks()
  • Benchmark retrieval with get_benchmark()
  • Prediction support via predict()
  • Convergence visualization via plot()
  • Model summaries via print()

Installation

During development, the package can be installed from a local source checkout using:

devtools::install()

Then load the package with:

library(SBOAtools)

You can also install the development version from GitHub:

install.packages("remotes")
remotes::install_github("burakdilber/SBOAtools")

Main Functions

sboa()

Performs general-purpose continuous optimization using the Secretary Bird Optimization Algorithm.

sboa_mlp()

Trains a single-hidden-layer multilayer perceptron using the Secretary Bird Optimization Algorithm.

list_benchmarks()

Displays the built-in benchmark functions available in the package.

get_benchmark()

Returns a benchmark definition and its metadata.


Built-in Benchmark Functions

SBOAtools includes 23 built-in benchmark functions (F1-F23) for continuous optimization studies.

You can inspect the available benchmark functions with:

list_benchmarks()

You can retrieve a benchmark definition and its metadata with:

b <- get_benchmark("F9")
b$label
b$category
b$fn(rep(1, 5))

The built-in benchmark set includes unimodal, multimodal, and fixed-dimension functions.


Example 1: General Optimization with a User-Defined Function

library(SBOAtools)

sphere <- function(x) sum(x^2)

res <- sboa(
  fn = sphere,
  lower = rep(-10, 5),
  upper = rep(10, 5),
  n_agents = 10,
  max_iter = 20,
  seed = 123,
  verbose = FALSE
)

print(res)
plot(res)

res$value
res$par

Example 2: General Optimization with a Built-in Benchmark

library(SBOAtools)

list_benchmarks()

res2 <- sboa(
  fn = "F1",
  lower = rep(-100, 30),
  upper = rep(100, 30),
  n_agents = 30,
  max_iter = 500,
  seed = 123,
  verbose = FALSE
)

print(res2)
plot(res2)

You can also inspect a specific benchmark before optimization:

b <- get_benchmark("F14")
b$label
b$fixed_dim

Example 3: MLP Training with SBOA

library(SBOAtools)

set.seed(123)

X_train <- matrix(runif(40), nrow = 10, ncol = 4)
y_train <- matrix(runif(10), nrow = 10, ncol = 1)

fit_mlp <- sboa_mlp(
  X_train = X_train,
  y_train = y_train,
  hidden_dim = 3,
  n_agents = 10,
  max_iter = 20,
  lower = -1,
  upper = 1,
  seed = 123,
  verbose = FALSE
)

print(fit_mlp)
plot(fit_mlp)

pred <- predict(fit_mlp, X_train)
pred

Returned Objects

Output of sboa()

The sboa() function returns an object of class "sboa" containing:

  • par: best solution found
  • value: best objective function value
  • convergence: convergence curve over iterations
  • population: final population matrix
  • fitness: final fitness values of the population
  • call: matched function call
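
The convergence element is the curve of best objective values across iterations, so it is typically non-increasing. A minimal base-R sketch of what such a best-so-far curve looks like (illustration only, not the package's internal code):

```r
# Best-so-far curve built from raw per-iteration objective values.
# sboa() records this internally during the run; here we mimic it
# with cummin() on a made-up sequence of per-iteration bests.
raw_best    <- c(5.0, 3.2, 4.1, 2.7, 2.7, 1.9)
convergence <- cummin(raw_best)  # running minimum per iteration
convergence
# 5.0 3.2 3.2 2.7 2.7 1.9
```

Because the curve tracks the best solution found so far, a flat segment means no improvement in those iterations, which is useful when judging whether max_iter was large enough.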

Output of sboa_mlp()

The sboa_mlp() function returns an object of class "sboa_mlp" containing:

  • par: optimized neural network parameters
  • value: best objective function value
  • convergence: convergence curve over iterations
  • input_dim: number of input variables
  • hidden_dim: number of hidden neurons
  • output_dim: number of output variables
  • x_min: minimum values used for input normalization
  • x_max: maximum values used for input normalization
  • y_min: minimum values used for output normalization
  • y_max: maximum values used for output normalization
  • fitted: fitted values on the original scale
  • metrics: training performance metrics
  • call: matched function call
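
The x_min/x_max and y_min/y_max elements suggest min-max scaling of inputs and outputs. A short sketch under that assumption (illustrative helpers, not the package's internal functions):

```r
# Min-max scaling to [0, 1] and the inverse transform, consistent
# with the x_min/x_max and y_min/y_max elements stored in the fit.
# Hypothetical helpers for illustration only.
scale01   <- function(x, lo, hi) (x - lo) / (hi - lo)
unscale01 <- function(z, lo, hi) z * (hi - lo) + lo

x  <- c(2, 5, 8)
z  <- scale01(x, min(x), max(x))    # 0.0 0.5 1.0
x2 <- unscale01(z, min(x), max(x))  # recovers 2 5 8
```

Storing the scaling bounds in the returned object is what lets predict() map new inputs into the trained network's scale and return predictions on the original scale, as the fitted element does.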

Current Scope

The current version of the package supports:

  • continuous optimization problems
  • built-in benchmark functions for optimization experiments
  • single-hidden-layer MLP models
  • sigmoid activation in hidden and output layers
  • regression-oriented neural network training with mean squared error (MSE)
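
The model form listed above can be sketched in a few lines of base R: a single hidden layer, sigmoid activation in both layers, and MSE as the training objective. The weight layout here is an assumption for illustration; sboa_mlp() manages its parameter vector internally.

```r
# Illustrative forward pass for a single-hidden-layer MLP with
# sigmoid activation in hidden and output layers, scored by MSE
# (the objective SBOA minimizes during training).
sigmoid <- function(z) 1 / (1 + exp(-z))

mlp_forward <- function(X, W1, b1, W2, b2) {
  # Biases are expanded row-wise so each sample gets the same bias.
  H <- sigmoid(X %*% W1 + matrix(b1, nrow(X), length(b1), byrow = TRUE))
  sigmoid(H %*% W2 + matrix(b2, nrow(H), length(b2), byrow = TRUE))
}

set.seed(1)
X  <- matrix(runif(8), nrow = 4, ncol = 2)        # 4 samples, 2 inputs
y  <- matrix(runif(4), nrow = 4, ncol = 1)
W1 <- matrix(rnorm(2 * 3), 2, 3); b1 <- rnorm(3)  # 3 hidden neurons
W2 <- matrix(rnorm(3 * 1), 3, 1); b2 <- rnorm(1)  # 1 output

pred <- mlp_forward(X, W1, b1, W2, b2)
mse  <- mean((y - pred)^2)
```

Because the output layer is also sigmoid, raw network outputs lie in (0, 1), which is why output normalization (y_min/y_max) matters for regression targets outside that range.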

Future Extensions

Possible future improvements include:

  • engineering optimization examples
  • train/test evaluation helpers
  • classification support
  • alternative activation functions
  • multiple hidden layers
  • additional visualization utilities
  • automatic benchmark bound selection within sboa()

Authors

  • Burak Dilber
  • A. Fırat Özdemir

License

MIT License

About

❗ This is a read-only mirror of the CRAN R package repository.

SBOAtools — Secretary Bird Optimization for Continuous Optimization and Neural Network Training.

Homepage: https://github.com/burakdilber/SBOAtools
Report bugs for this package: https://github.com/burakdilber/SBOAtools/issues
