Written by Julian Lanson
Posted on April 11, 2025
Hipcheck 3.13.0 is out! This is a lighter release as far as user-facing changes go, but internally we have been doing a lot of refactoring and development to prepare for exciting new changes that will be ready in upcoming releases.
If you missed the previous blog post, the Hipcheck team has released a Python SDK; Hipcheck users can now easily write their own data and analysis plugins for Hipcheck in Python!
investigate-if-fail
Hipcheck currently has two ways it may determine that a target needs investigation. One is the investigate policy expression; the other is the investigate-if-fail list: if any plugin in this list fails, the target is marked as "investigate" even if the overall analysis passed. Until now, the report did not make clear when investigate-if-fail was the cause of an "investigate" determination. Now, the "recommendation" section of the report JSON has an additional "reason" field.
When the investigate policy expression is the cause of an "investigate" recommendation, the field looks like this:
"reason": "Policy",
But when the cause was one or more entries in investigate-if-fail, it looks like this:
"reason": {
"FailedAnalyses": [
"mitre/affiliation"
]
},
The human-readable report format has also been updated:
Recommendation
INVESTIGATE the following investigate-if-fail plugins failed: mitre/affiliation
hc explain target-triple Subcommand
We added an hc explain subcommand to act as an umbrella for any commands we offer to help users (overloading the hc help command generated by the clap CLI parsing library was not feasible). The first command we've added under hc explain is target-triple, which prints out the architecture of the current platform as detected by Hipcheck, along with the set of other "known" and supported platforms. This can help users debug issues relating to plugin selection and startup in hc check, since information about the detected platform is otherwise not readily exposed to users.
We updated the entropy plugin to record commit hashes above the configured entropy threshold as concerns. This was the behavior of the entropy analysis prior to the introduction of the plugin system, but got lost somewhere along the way; the functionality has been restored.
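As a rough illustration of the restored behavior (the real plugin's data structures differ, so this is only a stand-in), the idea is to filter analyzed commits by the configured threshold and report the hashes that exceed it as concerns:

```rust
// Purely illustrative stand-in: collect the hashes of commits whose entropy
// score exceeds the configured threshold so they can be reported as concerns.
fn entropy_concerns(commits: &[(String, f64)], threshold: f64) -> Vec<String> {
    commits
        .iter()
        .filter(|(_hash, score)| *score > threshold)
        .map(|(hash, _score)| hash.clone())
        .collect()
}
```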
In the Hipcheck v3.12 release and associated plugin releases, we were accidentally capturing the plugin standard output stream (stdout) along with standard error (stderr). Furthermore, the Rust SDK logging subscriber was defaulting to emitting log entries on stdout. In this release we have cleaned things up: our plugin SDKs emit logging information on stderr, and Hipcheck core only captures plugin processes' stderr stream.
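For plugin authors, the practical upshot is that log output now goes to stderr. As a minimal sketch of the general pattern (assuming a tracing-subscriber based logger; this is not the SDK's actual code), routing logs to stderr looks like this:

```rust
fn init_logging() {
    // Route all log output to stderr instead of the default stdout, so that
    // nothing the plugin logs ends up mixed into its stdout stream.
    tracing_subscriber::fmt()
        .with_writer(std::io::stderr)
        .init();
}
```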
Finally, we revised our policy file parsing logic to allow free-standing analysis nodes in the analyze tree. Previously, all analysis nodes had to be the child of a category node.
The following are notable updates to Hipcheck that do not (currently) impact user experience.
The team has done a lot of work preparing Hipcheck core to support analyzing multiple targets in a single invocation, as described in RFD11. Since one of Hipcheck's primary use-cases is analyzing all the dependencies in a user's codebase, we want to enable doing just that with a single command instead of users manually running Hipcheck against one dependency after another. Not only is this tedious, but Hipcheck needing to spin up, configure, and tear down the analysis plugins each time is computationally wasteful. We are currently preparing Hipcheck to support taking a project dependency specification or lock file (such as go.mod, package-lock.json, Cargo.lock, etc.) as input. Hipcheck will then automatically derive all the repositories to analyze from that file and generate reports for each.
The fundamental internal API change is that a TargetSeed (the struct derived from the string passed to hc check) now produces an unbounded stream of Target objects instead of just one Target. The existing "single-target" seed types like packages, remote repos, and SBOMs will still only produce a stream containing one Target, but this change allows us to implement support for "multi-target" seeds like the lock files above. We have already implemented initial support for extracting a list of "single-target" seeds from go.mod and package-lock.json.
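As a greatly simplified sketch of the shape of this change (the real TargetSeed and Target types carry much more information, and the actual resolution logic is more involved), a seed now resolves to a stream of targets rather than a single one:

```rust
use futures::stream::{self, Stream};

// Simplified stand-ins; the real types in Hipcheck core are richer.
struct Target {
    repo_url: String,
}

enum TargetSeed {
    // A "single-target" seed, e.g. a remote repository URL passed to `hc check`.
    RemoteRepo(String),
    // A "multi-target" seed, e.g. the dependencies listed in a lock file.
    LockFile(Vec<String>),
}

impl TargetSeed {
    // The key shift: a seed resolves to a stream of targets rather than
    // exactly one, so a lock file can fan out into many analyses.
    fn resolve(self) -> impl Stream<Item = Target> {
        let urls = match self {
            TargetSeed::RemoteRepo(url) => vec![url],
            TargetSeed::LockFile(deps) => deps,
        };
        stream::iter(urls.into_iter().map(|repo_url| Target { repo_url }))
    }
}
```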
With this change, there are three places where we can employ async programming to parallelize Hipcheck operations for maximum efficiency. The first is parallelizing the resolution of "single-target" seeds into Targets, which often involves git clone-ing a remote repository. We are working on switching our Git library from git2 to gix, which will both remove the openssl dependency that slows our build times and allow us to use async repository cloning and manipulation functions. With this, we can work on resolving multiple repositories from lock files simultaneously.
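To make the benefit concrete, here is a hedged sketch of the kind of concurrency this unlocks; clone_repo is a hypothetical stand-in for an async clone (e.g. via gix), not Hipcheck's actual code:

```rust
use std::path::PathBuf;

use futures::stream::{self, StreamExt};

// Hypothetical stand-in for an async repository clone (e.g. via `gix`).
async fn clone_repo(url: String) -> PathBuf {
    PathBuf::from(format!("/tmp/{}", url.replace('/', "_")))
}

// Resolve many repositories concurrently, with at most four clones in flight.
async fn resolve_repos(urls: Vec<String>) -> Vec<PathBuf> {
    stream::iter(urls)
        .map(clone_repo)
        .buffer_unordered(4)
        .collect()
        .await
}
```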
Secondly, we can parallelize the analysis step that effectively turns Target objects into Reports. We have completed an initial implementation of this step; the Session object that drives analysis is now less stateful, and we can spin up multiple Sessions to act as a pool of workers that read Targets off the stream as they become available.
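Using the Target stand-in from the earlier sketch, the worker-pool idea looks roughly like this; analyze_one is a hypothetical placeholder for a full analysis session, not the real Session API:

```rust
use futures::stream::{Stream, StreamExt};

// Hypothetical placeholder for running a full analysis session on one target.
async fn analyze_one(target: Target) -> String {
    format!("report for {}", target.repo_url)
}

// Up to `pool_size` analyses run at once, each picking up the next target
// from the stream as soon as it becomes available.
async fn analyze_all(targets: impl Stream<Item = Target>, pool_size: usize) {
    targets
        .for_each_concurrent(pool_size, |target| async move {
            let _report = analyze_one(target).await;
            // Hand the report off to the reporting stage here.
        })
        .await;
}
```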
Finally, we can stream-ify the Reports generated by these Session objects and emit them as they are generated. Currently, Hipcheck only supports writing reports to the shell; as part of the "multi-target" seed support, we are looking at supporting writing reports as individual JSON files to an output directory.
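Purely as a speculative sketch of what per-target output could look like (this is not implemented and the eventual design may differ), each report might be written to its own JSON file in a user-chosen directory:

```rust
use std::fs;
use std::path::Path;

// Speculative sketch only: write one report as its own JSON file, using a
// filesystem-safe version of the target name for the file name.
fn write_report(out_dir: &Path, target_name: &str, report_json: &str) -> std::io::Result<()> {
    fs::create_dir_all(out_dir)?;
    let file_name = format!("{}.json", target_name.replace('/', "_"));
    fs::write(out_dir.join(file_name), report_json)
}
```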
Although we have internally completed parsing of a few lock file types, this refactor is ongoing and we have not yet made analyzing them available through the CLI. We are looking forward to doing so in a future release.
cargo xtask benchmark
We've taken inspiration from Rust's performance website to track changes to our own tool's performance over time, with the goal of reducing our per-target resolution and analysis time in the long run. We plan to create a webpage similar to Rust's to publicly display performance information, but for now the first step was this release's implementation of a benchmark subcommand under our cargo xtask tool. This automation runs Hipcheck against a pre-defined set of analysis targets while collecting performance data, and appends the results to .csv files in the target output directory. With the basic structure in place, we can add more targets and more metrics going forward.
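As a rough sketch of the append-only CSV idea (not the actual xtask code), each run would add one timing row per target to a results file:

```rust
use std::fs::OpenOptions;
use std::io::Write;

// Rough sketch of the append-only bookkeeping: each benchmark run appends one
// timing row per analysis target to the given CSV file.
fn append_benchmark_row(csv_path: &str, target: &str, seconds: f64) -> std::io::Result<()> {
    let mut file = OpenOptions::new().create(true).append(true).open(csv_path)?;
    writeln!(file, "{target},{seconds}")
}
```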
We're always looking for new contributors! If you'd like to learn more about Hipcheck and get involved in contributing, please check out our Roadmap and feel free to get in touch with us through our Discussions board!
As always, we want to say a big "Thank you!" to everyone who supports the project at MITRE, to CISA for sponsoring our current work on it, to our prior government sponsors who have helped advance Hipcheck, and to everyone who has contributed, given feedback, or encouraged us in building it.
The following team members contributed to this release: