
Commit c352d77 (parent 314de7a)
Commit message: render

1,216 files changed: 2377 additions & 218131 deletions


docs/.meta.json

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
 {
-"timestamp": "2025-11-14 15:40:54 CET",
+"timestamp": "2026-02-20 00:29:44 CET",
 "package_version": "0.25.0",
-"register_commit": "94eb0c23395e568ad4f49ef9905b4153c0f74a91",
-"register_commit_short": "94eb0c2"
+"register_commit": "314de7a5ed0766e4015dd5031e84f3e53b4b3737",
+"register_commit_short": "314de7a"
 }

docs/certs/2020-001/index.html

Lines changed: 2 additions & 2 deletions
@@ -13,7 +13,7 @@
 
 <title>CODECHECK Certificate 2020-001</title>
 
-<script src="../../libs/header-attrs-2.29/header-attrs.js"></script>
+<script src="../../libs/header-attrs-2.30/header-attrs.js"></script>
 <script src="../../libs/jquery-3.6.0/jquery-3.6.0.min.js"></script>
 <meta name="viewport" content="width=device-width, initial-scale=1" />
 <link href="../../libs/bootstrap-3.3.5/css/bootstrap.min.css" rel="stylesheet" />
@@ -77,7 +77,7 @@
 "name": "Stephen R Piccolo"
 }
 ],
-"abstract": "<jats:title>Abstract<\/jats:title>\n <jats:sec>\n <jats:title>Background<\/jats:title>\n <jats:p>Classification algorithms assign observations to groups based on patterns in data. The machine-learning community have developed myriad classification algorithms, which are used in diverse life science research domains. Algorithm choice can affect classification accuracy dramatically, so it is crucial that researchers optimize the choice of which algorithm(s) to apply in a given research domain on the basis of empirical evidence. In benchmark studies, multiple algorithms are applied to multiple datasets, and the researcher examines overall trends. In addition, the researcher may evaluate multiple hyperparameter combinations for each algorithm and use feature selection to reduce data dimensionality. Although software implementations of classification algorithms are widely available, robust benchmark comparisons are difficult to perform when researchers wish to compare algorithms that span multiple software packages. Programming interfaces, data formats, and evaluation procedures differ across software packages; and dependency conflicts may arise during installation.<\/jats:p>\n <\/jats:sec>\n <jats:sec>\n <jats:title>Findings<\/jats:title>\n <jats:p>To address these challenges, we created ShinyLearner, an open-source project for integrating machine-learning packages into software containers. ShinyLearner provides a uniform interface for performing classification, irrespective of the library that implements each algorithm, thus facilitating benchmark comparisons. In addition, ShinyLearner enables researchers to optimize hyperparameters and select features via nested cross-validation; it tracks all nested operations and generates output files that make these steps transparent. ShinyLearner includes a Web interface to help users more easily construct the commands necessary to perform benchmark comparisons. ShinyLearner is freely available at https://github.com/srp33/ShinyLearner.<\/jats:p>\n <\/jats:sec>\n <jats:sec>\n <jats:title>Conclusions<\/jats:title>\n <jats:p>This software is a resource to researchers who wish to benchmark multiple classification or feature-selection algorithms on a given dataset. We hope it will serve as example of combining the benefits of software containerization with a user-friendly approach.<\/jats:p>\n <\/jats:sec>",
+"abstract": "<jats:title>Abstract<\/jats:title>\n <jats:sec>\n <jats:title>Background<\/jats:title>\n <jats:p>Classification algorithms assign observations to groups based on patterns in data. The machine-learning community have developed myriad classification algorithms, which are used in diverse life science research domains. Algorithm choice can affect classification accuracy dramatically, so it is crucial that researchers optimize the choice of which algorithm(s) to apply in a given research domain on the basis of empirical evidence. In benchmark studies, multiple algorithms are applied to multiple datasets, and the researcher examines overall trends. In addition, the researcher may evaluate multiple hyperparameter combinations for each algorithm and use feature selection to reduce data dimensionality. Although software implementations of classification algorithms are widely available, robust benchmark comparisons are difficult to perform when researchers wish to compare algorithms that span multiple software packages. Programming interfaces, data formats, and evaluation procedures differ across software packages; and dependency conflicts may arise during installation.<\/jats:p>\n <\/jats:sec>\n <jats:sec>\n <jats:title>Findings<\/jats:title>\n <jats:p>To address these challenges, we created ShinyLearner, an open-source project for integrating machine-learning packages into software containers. ShinyLearner provides a uniform interface for performing classification, irrespective of the library that implements each algorithm, thus facilitating benchmark comparisons. In addition, ShinyLearner enables researchers to optimize hyperparameters and select features via nested cross-validation; it tracks all nested operations and generates output files that make these steps transparent. ShinyLearner includes a Web interface to help users more easily construct the commands necessary to perform benchmark comparisons. ShinyLearner is freely available at https://github.com/srp33/ShinyLearner.<\/jats:p>\n <\/jats:sec>\n <jats:sec>\n <jats:title>Conclusions<\/jats:title>\n <jats:p>This software is a resource to researchers who wish to benchmark multiple classification or feature-selection algorithms on a given dataset. We hope it will serve as example of combining the benefits of software containerization with a user-friendly approach.<\/jats:p>\n <\/jats:sec>",
 "url": "https://doi.org/10.1093/gigascience/giaa026",
 "sameAs": "https://doi.org/10.1093/gigascience/giaa026"
 },

docs/certs/2020-001/index.json

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@
 ],
 "reference": "https://doi.org/10.1093/gigascience/giaa026",
 "abstract": {
-"text": "<jats:title>Abstract<\/jats:title>\n <jats:sec>\n <jats:title>Background<\/jats:title>\n <jats:p>Classification algorithms assign observations to groups based on patterns in data. The machine-learning community have developed myriad classification algorithms, which are used in diverse life science research domains. Algorithm choice can affect classification accuracy dramatically, so it is crucial that researchers optimize the choice of which algorithm(s) to apply in a given research domain on the basis of empirical evidence. In benchmark studies, multiple algorithms are applied to multiple datasets, and the researcher examines overall trends. In addition, the researcher may evaluate multiple hyperparameter combinations for each algorithm and use feature selection to reduce data dimensionality. Although software implementations of classification algorithms are widely available, robust benchmark comparisons are difficult to perform when researchers wish to compare algorithms that span multiple software packages. Programming interfaces, data formats, and evaluation procedures differ across software packages; and dependency conflicts may arise during installation.<\/jats:p>\n <\/jats:sec>\n <jats:sec>\n <jats:title>Findings<\/jats:title>\n <jats:p>To address these challenges, we created ShinyLearner, an open-source project for integrating machine-learning packages into software containers. ShinyLearner provides a uniform interface for performing classification, irrespective of the library that implements each algorithm, thus facilitating benchmark comparisons. In addition, ShinyLearner enables researchers to optimize hyperparameters and select features via nested cross-validation; it tracks all nested operations and generates output files that make these steps transparent. ShinyLearner includes a Web interface to help users more easily construct the commands necessary to perform benchmark comparisons. ShinyLearner is freely available at https://github.com/srp33/ShinyLearner.<\/jats:p>\n <\/jats:sec>\n <jats:sec>\n <jats:title>Conclusions<\/jats:title>\n <jats:p>This software is a resource to researchers who wish to benchmark multiple classification or feature-selection algorithms on a given dataset. We hope it will serve as example of combining the benefits of software containerization with a user-friendly approach.<\/jats:p>\n <\/jats:sec>",
+"text": "<jats:title>Abstract<\/jats:title>\n <jats:sec>\n <jats:title>Background<\/jats:title>\n <jats:p>Classification algorithms assign observations to groups based on patterns in data. The machine-learning community have developed myriad classification algorithms, which are used in diverse life science research domains. Algorithm choice can affect classification accuracy dramatically, so it is crucial that researchers optimize the choice of which algorithm(s) to apply in a given research domain on the basis of empirical evidence. In benchmark studies, multiple algorithms are applied to multiple datasets, and the researcher examines overall trends. In addition, the researcher may evaluate multiple hyperparameter combinations for each algorithm and use feature selection to reduce data dimensionality. Although software implementations of classification algorithms are widely available, robust benchmark comparisons are difficult to perform when researchers wish to compare algorithms that span multiple software packages. Programming interfaces, data formats, and evaluation procedures differ across software packages; and dependency conflicts may arise during installation.<\/jats:p>\n <\/jats:sec>\n <jats:sec>\n <jats:title>Findings<\/jats:title>\n <jats:p>To address these challenges, we created ShinyLearner, an open-source project for integrating machine-learning packages into software containers. ShinyLearner provides a uniform interface for performing classification, irrespective of the library that implements each algorithm, thus facilitating benchmark comparisons. In addition, ShinyLearner enables researchers to optimize hyperparameters and select features via nested cross-validation; it tracks all nested operations and generates output files that make these steps transparent. ShinyLearner includes a Web interface to help users more easily construct the commands necessary to perform benchmark comparisons. ShinyLearner is freely available at https://github.com/srp33/ShinyLearner.<\/jats:p>\n <\/jats:sec>\n <jats:sec>\n <jats:title>Conclusions<\/jats:title>\n <jats:p>This software is a resource to researchers who wish to benchmark multiple classification or feature-selection algorithms on a given dataset. We hope it will serve as example of combining the benefits of software containerization with a user-friendly approach.<\/jats:p>\n <\/jats:sec>",
 "source": "CrossRef"
 }
 },

docs/certs/2020-002/index.html

Lines changed: 4 additions & 2 deletions
@@ -13,7 +13,7 @@
 
 <title>CODECHECK Certificate 2020-002</title>
 
-<script src="../../libs/header-attrs-2.29/header-attrs.js"></script>
+<script src="../../libs/header-attrs-2.30/header-attrs.js"></script>
 <script src="../../libs/jquery-3.6.0/jquery-3.6.0.min.js"></script>
 <meta name="viewport" content="width=device-width, initial-scale=1" />
 <link href="../../libs/bootstrap-3.3.5/css/bootstrap.min.css" rel="stylesheet" />
@@ -78,6 +78,7 @@
 "name": "Leslie S. Smith"
 }
 ],
+"abstract": "A neural net was used to analyse samples of natural images and text. For the natural images, components resemble derivatives of Gaussian operators, similar to those found in visual cortex and inferred from psychophysics. While the results from natural images do not depend on scale, those from text images are highly scale dependent. Convolution of one of the text components with an original image shows that it is sensitive to inter-word gaps.",
 "url": "https://doi.org/10.1088/0954-898X_3_1_008",
 "sameAs": "https://doi.org/10.1088/0954-898X_3_1_008"
 },
@@ -276,10 +277,11 @@ <h2 class="card-title cert-card-title">
 <!-- Abstract section -->
 <div id="abstract-section" class="text-container">
 <p>
-<strong>Abstract</strong>: <i>Obtained from <span class="math inline"><em>a</em><em>b</em><em>s</em><em>t</em><em>r</em><em>a</em><em>c</em><em>t</em><sub><em>s</em></sub><em>o</em><em>u</em><em>r</em><em>c</em><em>e</em></span></i>
+<strong>Abstract</strong>: <i>Obtained from <a href="https://openalex.org">OpenAlex</a></i>
 </p>
 <div id="abstract-content" class="text-box">
 <p>
+A neural net was used to analyse samples of natural images and text. For the natural images, components resemble derivatives of Gaussian operators, similar to those found in visual cortex and inferred from psychophysics. While the results from natural images do not depend on scale, those from text images are highly scale dependent. Convolution of one of the text components with an original image shows that it is sensitive to inter-word gaps.
 </p>
 </div>
 </div>

docs/certs/2020-002/index.json

Lines changed: 5 additions & 1 deletion
@@ -16,7 +16,11 @@
 "name": "Leslie S. Smith"
 }
 ],
-"reference": "https://doi.org/10.1088/0954-898X_3_1_008"
+"reference": "https://doi.org/10.1088/0954-898X_3_1_008",
+"abstract": {
+"text": "A neural net was used to analyse samples of natural images and text. For the natural images, components resemble derivatives of Gaussian operators, similar to those found in visual cortex and inferred from psychophysics. While the results from natural images do not depend on scale, those from text images are highly scale dependent. Convolution of one of the text components with an original image shows that it is sensitive to inter-word gaps.",
+"source": "OpenAlex"
+}
 },
 "codecheck": {
 "codecheckers": [

docs/certs/2020-003/index.html

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@
 
 <title>CODECHECK Certificate 2020-003</title>
 
-<script src="../../libs/header-attrs-2.29/header-attrs.js"></script>
+<script src="../../libs/header-attrs-2.30/header-attrs.js"></script>
 <script src="../../libs/jquery-3.6.0/jquery-3.6.0.min.js"></script>
 <meta name="viewport" content="width=device-width, initial-scale=1" />
 <link href="../../libs/bootstrap-3.3.5/css/bootstrap.min.css" rel="stylesheet" />

docs/certs/2020-004/index.html

Lines changed: 4 additions & 2 deletions
@@ -13,7 +13,7 @@
 
 <title>CODECHECK Certificate 2020-004</title>
 
-<script src="../../libs/header-attrs-2.29/header-attrs.js"></script>
+<script src="../../libs/header-attrs-2.30/header-attrs.js"></script>
 <script src="../../libs/jquery-3.6.0/jquery-3.6.0.min.js"></script>
 <meta name="viewport" content="width=device-width, initial-scale=1" />
 <link href="../../libs/bootstrap-3.3.5/css/bootstrap.min.css" rel="stylesheet" />
@@ -73,6 +73,7 @@
 "name": "C. W. Anderson"
 }
 ],
+"abstract": "It is shown how a system consisting of two neuronlike adaptive elements can solve a difficult learning control problem. The task is to balance a pole that is hinged to a movable cart by applying forces to the cart's base. It is argued that the learning problems faced by adaptive elements that are components of adaptive networks are at least as difficult as this version of the pole-balancing problem. The learning system consists of a single associative search element (ASE) and a single adaptive critic element (ACE). In the course of learning to balance the pole, the ASE constructs associations between input and output by searching under the influence of reinforcement feedback, and the ACE constructs a more informative evaluation function than reinforcement feedback alone can provide. The differences between this approach and other attempts to solve problems using neurolike elements are discussed, as is the relation of this work to classical and instrumental conditioning in animal learning studies and its possible implications for research in the neurosciences.",
 "url": "https://doi.org/10.1109/TSMC.1983.6313077",
 "sameAs": "https://doi.org/10.1109/TSMC.1983.6313077"
 },
@@ -271,10 +272,11 @@ <h2 class="card-title cert-card-title">
 <!-- Abstract section -->
 <div id="abstract-section" class="text-container">
 <p>
-<strong>Abstract</strong>: <i>Obtained from <span class="math inline"><em>a</em><em>b</em><em>s</em><em>t</em><em>r</em><em>a</em><em>c</em><em>t</em><sub><em>s</em></sub><em>o</em><em>u</em><em>r</em><em>c</em><em>e</em></span></i>
+<strong>Abstract</strong>: <i>Obtained from <a href="https://openalex.org">OpenAlex</a></i>
 </p>
 <div id="abstract-content" class="text-box">
 <p>
+It is shown how a system consisting of two neuronlike adaptive elements can solve a difficult learning control problem. The task is to balance a pole that is hinged to a movable cart by applying forces to the cart’s base. It is argued that the learning problems faced by adaptive elements that are components of adaptive networks are at least as difficult as this version of the pole-balancing problem. The learning system consists of a single associative search element (ASE) and a single adaptive critic element (ACE). In the course of learning to balance the pole, the ASE constructs associations between input and output by searching under the influence of reinforcement feedback, and the ACE constructs a more informative evaluation function than reinforcement feedback alone can provide. The differences between this approach and other attempts to solve problems using neurolike elements are discussed, as is the relation of this work to classical and instrumental conditioning in animal learning studies and its possible implications for research in the neurosciences.
 </p>
 </div>
 </div>
