
Bolt build is very slow #2737

Open
nex3 opened this issue Aug 29, 2019 · 32 comments
Labels
enhancement New feature or request

Comments

@nex3
Contributor

nex3 commented Aug 29, 2019

I'm opening this as a meta-issue to track the holistic problem that the Bolt design system is seeing with very slow build times. I'd like to provide tools to help make it faster, and in doing so hopefully improve the performance for all users with similar use cases.

Analysis

At @sghoweri's suggestion, I've been testing performance on the test/sass-compile-test branch, with the following results:

  • LibSass with a monolithic entrypoint file: about 40s for initial compilation and rebuilds, no matter what file was changed.
  • LibSass with many different entrypoints combined via Webpack: about 17s for initial compilation, 17s for rebuilds when @bolt/core/styles/index.scss is modified, and 1s for rebuilds when an individual component is modified.
  • Dart Sass with a monolithic entrypoint file: about 47s for initial compilation and rebuilds, no matter what file was changed.
  • Dart Sass with many different entrypoints combined via Webpack: about 47s for initial compilation, 47s for rebuilds when @bolt/core/styles/index.scss is modified, and 1s for rebuilds when an individual component is modified.

Note: when compiling with Dart Sass, I'm using my own branch as well as a local version of Dart Sass with a fix for sass/dart-sass#811. I'm compiling with Fibers enabled to trigger the much-faster synchronous code path.

It's not surprising that Dart Sass is slower than LibSass for monolithic compilations, since pure JS is always going to be somewhat slower than C++, but it is surprising that LibSass benefits from multiple entrypoints while Dart Sass does not. @mgreter or @xzyfer, do you have any insight into why that could be? Is LibSass doing some sort of caching across compilations, or is it able to run multiple compilations in parallel?

I then attached a profiler to the Dart Sass compilation to see if I could determine where it's spending all that time. It looks like by far the biggest culprit—about 40% of the total compilation time—is spent resolving @imports. Most of this is spent waiting for filesystem calls to determine exactly which files exist. The remaining time is spent doing mostly bread-and-butter interpreter stuff, with a slight emphasis on built-in map manipulation functions.
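The filesystem cost described above suggests an obvious mitigation. Here is a minimal sketch (hypothetical, not Dart Sass's actual code) of memoizing file-existence checks during import resolution; the candidate order is loosely modeled on Sass's lookup rules, and `cachedExists`/`resolveImport` are invented names:

```typescript
// Sketch only: memoize "does this file exist?" lookups so each candidate
// path hits the filesystem at most once per compilation. Import resolution
// tries several candidates per @import (partial, plain, index files), so
// repeated imports of the same target otherwise repeat the same stat calls.
import * as fs from "fs";

const existsCache = new Map<string, boolean>();

function cachedExists(path: string): boolean {
  let hit = existsCache.get(path);
  if (hit === undefined) {
    hit = fs.existsSync(path); // only a cache miss touches the disk
    existsCache.set(path, hit);
  }
  return hit;
}

// Candidate expansion for `@import "foo"` relative to `dir`, roughly
// following Sass's lookup order (partial first, then plain, then index).
function resolveImport(dir: string, target: string): string | null {
  const candidates = [
    `${dir}/_${target}.scss`,
    `${dir}/${target}.scss`,
    `${dir}/${target}/_index.scss`,
    `${dir}/${target}/index.scss`,
  ];
  return candidates.find(cachedExists) ?? null;
}
```

With a few thousand `@import`s resolving into the same handful of files, this turns most of the filesystem traffic into map lookups.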

Command-Line Compilation

As an experiment, I also set up a version of the repo where the monolithic entrypoint can be compiled from the command-line. Compiling this with the native-code Dart Sass (using sass -I node_modules docs-site/sass-compile-test.scss > /dev/null) takes about 11s, although of course it has no caching across compilations so incremental compilations would be much more expensive.

Interestingly, SassC takes about 19s for the same compilation, which is also much faster than the monolithic compilation when driven via Webpack. It's not clear to me what's causing this major discrepancy... the command-line run comments out the export-data() function, but commenting it out in the Webpack run doesn't substantially increase its performance. It's possible that some of it is just performance improvements to LibSass itself between the version available through Node Sass (3.5.5) and the version I was testing with (3.6.1-9-gc713).

When profiling the Dart VM compilation, it looks like it's spending vastly less time (about 4.5% of the total compilation time) checking the filesystem. I think this is because Dart Sass's import semantics, especially in the presence of importers, are subtly different from the JavaScript API's in a way that allows it to cache the vast majority of lookups.

Possible Solutions

Note: any solution we come up with should avoid substantially regressing the single-component-recompilation case.

Embedded Dart Sass

This is likely to be by far the easiest solution. Dart Sass is currently easiest to use from JS as a pure-JS package, but as mentioned above JS as a language imposes a considerable amount of overhead. We're planning on launching an embedded mode that will run the Dart VM as a subprocess (sass/dart-sass#248), which should substantially improve performance relative to the pure JS version. It's hard to say exactly how much benefit this would provide (especially because it depends on which precise importer and caching semantics we decide on), but my guess is it would at least make Dart Sass's performance competitive with LibSass's.

Better Caching Semantics

As I mentioned earlier, Dart Sass running in JS library mode doesn't cache its import resolution within a single compilation. This is necessary to maintain strict compatibility with Node Sass, but it doesn't have to be locked in place forever. As part of #2509, we should look into defining a new set of semantics (like those in native Dart Sass) that are more amenable to caching.

Module System

One of the features of the new module system is ensuring that a given file is only loaded once. How much this will help depends on how much the current setup is importing the same files multiple times, though.
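To illustrate the load-once guarantee, here is a minimal sketch; the `Module` shape and `loadModule` helper are invented for illustration and are not the module system's actual internals:

```typescript
// Illustrative only: each canonical URL is evaluated a single time; later
// @use / @forward of the same file reuses the cached module instead of
// re-evaluating it, unlike @import, which re-executes the file every time.
type Module = { css: string; variables: Map<string, unknown> };

const loaded = new Map<string, Module>();
let evaluations = 0; // counts how often we actually evaluate a file

function loadModule(
  canonicalUrl: string,
  evaluate: (url: string) => Module
): Module {
  let mod = loaded.get(canonicalUrl);
  if (mod === undefined) {
    evaluations++; // only reached on the first load of this URL
    mod = evaluate(canonicalUrl);
    loaded.set(canonicalUrl, mod);
  }
  return mod;
}
```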

Cross-Compilation Caching

The current compilation setup compiles many different entrypoints and then uses Webpack to combine them. This has the benefit of allowing Webpack to avoid unnecessary recompilation when an individual component is modified, but it currently means that Sass (or at least Dart Sass) doesn't share any state across compilations of each separate entrypoint.

In general, it's not safe for Sass to assume that separate compilations have anything in common—the entire filesystem could have changed between two calls to render(). But when Webpack kicks off a batch of compilations, it's aware that they're all expected to work the same filesystem state. Sass could provide some API—perhaps a Compiler object—that makes the assumption that nothing changes across multiple compilations, so it can share cached import resolutions between them.

We could even go a step further and provide the ability for the Compiler to be informed when changes do happen, so that the cache can be invalidated only as much as necessary. Dart Sass already has support for this internally for --watch mode; we'd just need to provide an API for it. I'm not sure if Webpack exposes this information, though—maybe @evilebottnawi can provide insight here.

Loaded Module Caching

This is the furthest-reaching possibility, but also one that could get monolithic compilation to within the speed of file-by-file compilation for a single modified component. The module system defines a clear notion of the loaded state of a module, and we could cache this state across compilations and avoid even evaluating a module again once it's loaded.

The major complexity here is that loading a module can have side effects, including changing the state of another loaded module. We'd need to have some way of marking modules—as well as anything downstream from them—as uncachable when this happens. But uncachable modules are likely to be a small minority, so this should still provide considerable benefits.
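The marking described above amounts to a transitive walk over the dependents graph. A minimal sketch, with hypothetical names:

```typescript
// Sketch of the invalidation rule: if loading a module has side effects,
// mark it and, transitively, every module downstream of it as uncachable.
// `dependents` maps each module to the modules that loaded it.
const dependents = new Map<string, Set<string>>();
const uncachable = new Set<string>();

function addDependency(user: string, used: string): void {
  if (!dependents.has(used)) dependents.set(used, new Set());
  dependents.get(used)!.add(user);
}

// Mark `mod` and everything that (directly or indirectly) depends on it.
function markUncachable(mod: string): void {
  if (uncachable.has(mod)) return; // already marked; stop the recursion
  uncachable.add(mod);
  for (const user of dependents.get(mod) ?? []) markUncachable(user);
}
```

Everything outside the marked set keeps its cached loaded state, which is why a small minority of side-effecting modules shouldn't spoil the win.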

@nex3 nex3 added the enhancement New feature or request label Aug 29, 2019
@mgreter

mgreter commented Aug 29, 2019

LibSass has no parallelization feature, but I think Webpack can invoke multiple compilers on different threads. I seem to remember that we had an issue with Webpack in the past where we had one static variable that was shared between threads, which led me to the conclusion that Webpack does indeed do some parallelism. I've made a POC in the past to parallelize the parsing of imports in LibSass: sass/libsass#2544. It would be interesting to see how much the given case would profit from it. Evaluation seems impossible to parallelize, given the dependencies on e.g. variable assignments. I will try to do a profiling run (MSVC) on that code base to see where we spend most of the time in LibSass.

@mgreter

mgreter commented Aug 29, 2019

@nex3 can you give an exact command line for libsass/sassc to reproduce this (what to check out, how sassc is invoked)?

@nex3
Contributor Author

nex3 commented Aug 29, 2019

Check out this branch, run yarn install, and then run yarn run start (you may also need to edit the browser-sync source files to start on port 3001 rather than port 3000). LibSass is invoked through Webpack's sass-loader.

@mgreter

mgreter commented Aug 29, 2019

OK, so it's also behind Node Sass? Did you do a pure run with a minimal sassc wrapper?
Btw., I don't have yarn installed yet; is there any way to do this with on-board npm?

@nex3
Contributor Author

nex3 commented Aug 29, 2019

I did (see Command-Line Compilation for instructions) but that's not really a good representation of how this is being compiled in practice.

@mgreter

mgreter commented Aug 29, 2019

It would still be nice to know whether the overhead is coming from Node or not ;)

@mgreter

mgreter commented Aug 29, 2019

Especially since I have no clue how to do profiling on C code when LibSass is invoked via Node...

@mgreter

mgreter commented Aug 29, 2019

I did npm install -g yarn and this is what I currently get:

D:\github-sass\perl-libsass\bolt>cd docs-site && yarn run start
yarn run v1.17.3
$ bolt start
'bolt' is not recognized as an internal or external command,
operable program or batch file.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

Anything obvious that I am missing?

@mgreter

mgreter commented Aug 29, 2019

Are you sure this works on windows, or do I need to get into my linux server or vm?

@nex3
Contributor Author

nex3 commented Aug 29, 2019

You have to run yarn install too, I think.

@mgreter

mgreter commented Aug 29, 2019

The results from linux:

git clone https://github.com/bolt-design-system/bolt --branch test/sass-compile-test
cd bolt
npm install yarn -g
yarn install
yarn run start
yarn run v1.17.3
$ cd docs-site && yarn run start
$ bolt start
/bin/sh: bolt: command not found

Still missing something!??

@nex3
Contributor Author

nex3 commented Aug 29, 2019

@sghoweri might be able to provide more insight...

@mgreter

mgreter commented Aug 29, 2019

One more distinct error I got from running npm run setup is

@bolt/[email protected]: The engine "node" is incompatible with this module. Expected version ">=10.0.0". Got "8.12.0"

@mgreter

mgreter commented Aug 29, 2019

I don't seem to be able to get this running. I updated my Node (win) to the latest LTS version (v10.16.3), but I still get various seemingly unrelated errors, whatever commands I try from those given above. Maybe my system is borked from having too much dev-related stuff installed (multiple MSVC/gcc/clang versions and SDKs). Giving up for today; maybe I'll try again next week. E.g.

bolt>yarn install
internal/modules/cjs/loader.js:638
    throw err;
    ^

Error: Cannot find module 'C:\Users\mgreter\AppData\Roaming\npm\node_modules\yarn-cli\yarn.js'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:636:15)
    at Function.Module._load (internal/modules/cjs/loader.js:562:25)
    at Function.Module.runMain (internal/modules/cjs/loader.js:831:12)
    at startup (internal/bootstrap/node.js:283:19)
    at bootstrapNodeJSCore (internal/bootstrap/node.js:622:3)

@sghoweri

Are you sure this works on windows, or do I need to get into my linux server or vm?

@mgreter let me double check the Windows setup process on this (sanity check).

I know the setup process for Linux and MacOS is buttoned up -- I'm just going to confirm the quick setup process on Windows and I'll let you know!

@sghoweri

git clone https://github.com/bolt-design-system/bolt --branch test/sass-compile-test
cd bolt
npm install yarn -g
yarn install
yarn run start
yarn run v1.17.3
$ cd docs-site && yarn run start
$ bolt start
/bin/sh: bolt: command not found

@mgreter most times I've seen the /bin/sh: bolt: command not found error, it's been due to node_modules not finishing the install process for some reason (e.g. canceled out, network issue, etc.).

My recommended next step here would be to remove your local node_modules folder at the root of the repo, re-run yarn to reinstall everything, then finally run yarn run start (still at the root of the repo) to boot up the local dev server.

@sghoweri

One more distinct error I got from running npm run setup is

@bolt/[email protected]: The engine "node" is incompatible with this module. Expected version ">=10.0.0". Got "8.12.0"

@nex3 😬 I'm going to see if I can disable some stuff in our default build and setup tasks to help slim down this test environment a bit (I'll try this out on a branch separate from the current test/sass-compile-test -- I'll keep you posted!)

@sghoweri

sghoweri commented Aug 30, 2019

@nex3 @mgreter OK! I've nixed + disabled quite a lot of the non-mission critical stuff on this new https://github.com/bolt-design-system/bolt/tree/test/sass-isolated-compile-test branch to hopefully help you guys with the setup process!

All you should need to do on this branch (assuming you've already cloned the repo and checked out the test/sass-isolated-compile-test branch) is run yarn to install, then just run one of these 4 build / serve commands below:

Sass Compiling w/ An Array of Files Passed to Webpack (Slow)

# Run Webpack Dev Server (~25.806s for first compile)
yarn run start

# Production Build
yarn run build

Sass Compiling via 1 Sass File w/ Imports Passed to Webpack (Very Slow)

# Run Webpack Dev Server
yarn run start:sass-import-test

# Production Build
yarn run build:sass-import-test

Hope this helps!!

@mgreter

mgreter commented Aug 30, 2019

OK, now I get a distinct error from node-sass install:

Downloading binary from https://github.com/sass/node-sass/releases/download/v3.13.1/win32-x64-64_binding.node
Cannot download "https://github.com/sass/node-sass/releases/download/v3.13.1/win32-x64-64_binding.node":

That file indeed seems not to be available; I'm not sure why it tries to download that old version.
Edit: the following node-gyp compile call might be failing due to multiple MSVC instances being installed.
Edit: I should have noted that this happens when calling npm install yarn-cli.

@nschonni
Contributor

3.13.1 is pretty old and doesn't support Node 10 (Module 64). I didn't see that version in the lockfile, so I'm not sure where it's coming from.

@alexander-akait

As I mentioned earlier, Dart Sass running in JS library mode doesn't cache its import resolution within a single compilation. This is necessary to maintain strict compatibility with Node Sass, but it doesn't have to be locked in place forever. As part of #2509, we should look into defining a new set of semantics (like those in native Dart Sass) that are more amenable to caching.

Not a bad idea. I tried implementing caching in webpackImporter: instead of providing the resolved URL, I provide the content of the resolved file, which decreases build time by around 2-5x.

@alexander-akait

When you use Webpack, please use cache-loader, set up using:

Sass Compiling w/ An Array of Files Passed to Webpack

It decreases build time very well. In webpack@5, caching at the loader level will be enabled by default, so you won't need cache-loader; in any case, Sass should cache each @import.

@xzyfer

xzyfer commented Aug 30, 2019 via email

@mgreter

mgreter commented Aug 31, 2019

After a lot of struggle to get anything to run, I finally got some numbers out, although only by invoking sassc directly and mangling the includes and other code a bit. Nonetheless, it seems to give some valid pointers. First, I wanted to mention that yarn doesn't seem to like being installed via npm; once I installed it through their installer, it started working on Linux. But I never got bolt to work on Windows.

I tested this against current master, where the monolithic file takes around 20s on my machine (Core i9-9900K), but I mainly analyzed the results against my current backporting/refactoring branch, which takes around 8.5s on my machine. This branch is much closer to the Dart Sass implementation: master does the eval stage in multiple steps, while the new branch does everything in one.

From the profile data I can see that around 40% is spent on memory allocation/deallocation. This seems reasonable given the immutable nature of Sass values, and since we haven't yet optimized memory allocations with e.g. free lists. For instance, 5% is spent in the map-merge function, copying the first map argument (re-creating the hash structure, calling key-comparison and hash functions). Nearly all the runtime is spent in the evaluation stage; parsing is barely noticeable (maybe 5%). So the hottest code path for bolt seems to be _evaluateArguments (54%): 40% for performing the evaluation of the args, and 10% for creating the ArgumentResults (again, mostly from copying the map and the vector).

[profiler screenshot: the top two external calls are to malloc and free]

[profiler screenshot]

So it seems the reason it is so slow is the fact that we create, copy, and delete a lot of temporary objects, given the nature of immutable Sass values and the "fact" that there is no map-set or list-set. I hope that at some point in LibSass we can use custom allocators to reuse memory more intelligently, but putting such a thing in place is a pretty complex task.
https://www.youtube.com/watch?v=LIb3L4vKZ7U

@mgreter

mgreter commented Sep 2, 2019

P.S. with a global memory pool and a free list on top, plus another few tweaks here and there, I got the runtime down to 3.5 seconds: doubling the speed at the price of doubling the memory usage. Memory went from around 50MB to around 100MB to compile bolt on x64. It seems that this is close to where we can get in LibSass (maybe 3 seconds is possible). Edit: And indeed it is:
[benchmark screenshot]

A few more numbers from psass 0.5.0 (perl sass/scss compiler):
$ psass ..\bolt\sass-import-test.scss -I ..\bolt\node_modules out.css --benchmark

libsass: 3.6.0-dirty

24 wallclock secs (24.5470 usr + 0.0940 sys = 24.6410 CPU)

libsass: 3.6.1-42-g8e22cd41

20 wallclock secs (20.2340 usr + 0.0310 sys = 20.2650 CPU)

libsass: 3.6.1-61-g61ce8975-dirty (current refactoring branch)

3 wallclock secs (3.0790 usr + 0.2970 sys = 3.3760 CPU)

Edit: Looks like I underestimated how fast it can be :)

[benchmark screenshot]

And I haven't even tackled any of the more complex possibilities. One that looks very promising is to move variable lookup from the evaluation stage to the parser for constant O(1) access time (it needs to be verified whether this is possible, but I don't see why it shouldn't be), which could shave off another 30% or so.

@mgreter

mgreter commented Sep 8, 2019

Sorry for hijacking this issue a bit to document what I've done so far in LibSass to optimize this. I had some more time over the weekend to check out variable runtime stack optimizations.

Numbers after libsass static variable access optimizations:

sassc (MSVC): 1.64s
psass (mingw): 2.07s
dart-sass: 8.90s

I don't want to go into the full details, but it is somewhat similar to how function stacks work in C compilers. Basically, for every scope we already know at parse time which variables that scope can hold (called stack variables in C). Now at runtime, when a function is called, the memory needed for all stack variables is pushed onto the global stack, and the start address of that frame is registered as the current frame of reference. Each stack variable already knows its static offset at parse time, so at runtime it knows its address will be "current frame start address" + "static offset".
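As a toy model of this scheme (invented names, not LibSass code): slots are assigned at "parse time", and a runtime read is a single array index off the frame base instead of a hash lookup by variable name:

```typescript
// One global value stack shared by all frames, as in the description above.
const stack: number[] = [];

// "Parse time": assign consecutive slot indices to a scope's variables.
function layoutScope(names: string[]): Map<string, number> {
  return new Map(names.map((n, i): [string, number] => [n, i]));
}

// "Runtime": push a frame big enough for the scope, return its base address.
function pushFrame(slotCount: number): number {
  const base = stack.length;
  for (let i = 0; i < slotCount; i++) stack.push(0);
  return base;
}

// Variable access is base + static offset — O(1), no name lookup at runtime.
function writeVar(base: number, slot: number, value: number): void {
  stack[base + slot] = value;
}

function readVar(base: number, slot: number): number {
  return stack[base + slot];
}
```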

One main obstacle I've hit is with e.g. the while loop:

$i: 1;
.foo {
  @while $i != 5 {
    a: $i;
    $i: $i + 1;
  }
  bleed: $i;
}

The problem is that the variable $i does not bleed out to the outer scope, although it is accessed in the condition. So on the first run, the $i in the condition points to the outer scope, and on all consecutive runs to the variable from the inner scope. This case therefore seems impossible to optimize statically. In fact, in all other languages I checked, the variable $i has to bleed out in order for the condition to work. For now I just fall back to dynamically looking up the vars from the active stack in these code paths. As a benefit of the new approach, we only create the lookup maps once per stack frame and re-use them at runtime.

To make benchmarking easier, I made a standalone package with sassc included: bolt-bench.zip

[benchmark screenshot]

There is still some room left. The @content rule scoping hasn't yet been moved to the new lookup, as I wasn't yet able to wrap my head around the scoping rules, so the old dynamic env is still created at runtime. Plus there are a few unnecessary argument copies here and there.

@mgreter

mgreter commented Sep 9, 2019

Another update; I had quite some fun with the MSVC profiler and got the bolt runtime down even further 🚀
I also used a trick to get a "warm cache" by training the MSVC compiler (profile-guided optimization) with the bolt bench run.
I'm not sure if this is exactly fair, as I don't really know how or which Dart Sass I execute for comparison.
Nonetheless, the resulting executable produces the same output, just in less time 😄
It seems to give around 20% of free performance on MSVC, biased towards the bolt use case.
We might need to see how we can use this for release binaries; gcc should support this too.

[benchmark screenshot]

dart-sass: 8.90s
psass (mingw): 1.21s
sassc (MSVC): 1.08s
sassc (trained): 0.87s

Overall this is at least a 20-fold improvement over current LibSass master, and up to 10x faster than Dart Sass, as far as I can measure. And yes, there are still a few edges left to optimize, but it now boils down to micro-benchmarking. Anyway, I think this is already pretty impressive 🐢 🐇.

@sghoweri

WOW!

I'm super excited to start kicking the tires on these optimizations in Bolt — is there anything I can do to help / try pulling in to test?

@mgreter

mgreter commented Oct 3, 2019

I got my refactoring branch running with node-sass today.
Here are more numbers, from an Intel(R) Atom(TM) CPU C2750 @ 2.40GHz.

sassc monolithic build

3.6.1: 143s
master: 107s
refactor: 8.3s

yarn run build

default node-sass: ~90s (83s)
real    1m26.796s
user    3m4.504s
sys     0m3.305s

node-sass refactor: ~49s (43s)
real    0m49.449s
user    1m8.014s
sys     0m2.415s

The difference between real and user time between the two versions seems to indicate quite a bit of overhead, either in I/O or thread contention, probably coming from Webpack?

P.S. in order to compile node-sass with the latest refactoring branch, libsass.gyp needs to be adjusted:

node-sass/src/libsass.gyp
    'sources': [
        'libsass/src/cencode.c',
        'libsass/src/ast.cpp',
        'libsass/src/ast_css.cpp',
        'libsass/src/ast_values.cpp',
        'libsass/src/ast_supports.cpp',
        'libsass/src/ast_sel_cmp.cpp',
        'libsass/src/ast_sel_unify.cpp',
        'libsass/src/ast_sel_super.cpp',
        'libsass/src/ast_sel_weave.cpp',
        'libsass/src/ast_selectors.cpp',
        'libsass/src/allocator.cpp',
        'libsass/src/context.cpp',
        'libsass/src/constants.cpp',
        'libsass/src/fn_utils.cpp',
        'libsass/src/fn_maps.cpp',
        'libsass/src/fn_lists.cpp',
        'libsass/src/fn_colors.cpp',
        'libsass/src/fn_numbers.cpp',
        'libsass/src/fn_strings.cpp',
        'libsass/src/fn_selectors.cpp',
        'libsass/src/fn_meta.cpp',
        'libsass/src/color_maps.cpp',
        'libsass/src/environment.cpp',
        'libsass/src/ast_fwd_decl.cpp',
        'libsass/src/file.cpp',
        'libsass/src/util.cpp',
        'libsass/src/util_string.cpp',
        'libsass/src/logger.cpp',
        'libsass/src/json.cpp',
        'libsass/src/units.cpp',
        'libsass/src/values.cpp',
        'libsass/src/plugins.cpp',
        'libsass/src/position.cpp',
        'libsass/src/offset.cpp',
        'libsass/src/serialize.cpp',
        'libsass/src/eval.cpp',
        'libsass/src/evaluate.cpp',
        'libsass/src/listize.cpp',
        'libsass/src/randomize.cpp',
        'libsass/src/cssize.cpp',
        'libsass/src/extender.cpp',
        'libsass/src/extension.cpp',
        'libsass/src/stylesheet.cpp',
        'libsass/src/interpolation.cpp',
        'libsass/src/parser.cpp',
        'libsass/src/parser_css.cpp',
        'libsass/src/parser_base.cpp',
        'libsass/src/parser_scss.cpp',
        'libsass/src/parser_sass.cpp',
        'libsass/src/parser_selector.cpp',
        'libsass/src/parser_stylesheet.cpp',
        'libsass/src/parser_expression.cpp',
        'libsass/src/parser_media_query.cpp',
        'libsass/src/parser_at_root_query.cpp',
        'libsass/src/parser_keyframe_selector.cpp',
        'libsass/src/source.cpp',
        'libsass/src/output.cpp',
        'libsass/src/inspect.cpp',
        'libsass/src/emitter.cpp',
        'libsass/src/scanner_span.cpp',
        'libsass/src/scanner_line.cpp',
        'libsass/src/scanner_string.cpp',
        'libsass/src/remove_placeholders.cpp',
        'libsass/src/sass.cpp',
        'libsass/src/sass_values.cpp',
        'libsass/src/sass_context.cpp',
        'libsass/src/sass_functions.cpp',
        'libsass/src/backtrace.cpp',
        'libsass/src/operators.cpp',
        'libsass/src/ast2c.cpp',
        'libsass/src/c2ast.cpp',
        'libsass/src/var_stack.cpp',
        'libsass/src/source_map.cpp',
        'libsass/src/source_state.cpp',
        'libsass/src/source_span.cpp',
        'libsass/src/error_handling.cpp',
        'libsass/src/MurmurHash2.cpp',
        'libsass/src/memory/SharedPtr.cpp',
        'libsass/src/memory_pool.cpp',
        'libsass/src/LUrlParser/LUrlParser.cpp',
        'libsass/src/utf8_string.cpp',
        'libsass/src/base64vlq.cpp'
    ]
   

These can always be gathered from libsass Makefile.conf.

@mgreter

mgreter commented Apr 26, 2020

Sorry for the long silence; I just got around to continuing this work due to the ongoing worldwide situation. I don't have much to share besides reporting that I brought the compilation time down by another 30% or so. Currently bolt-bench is down to the following numbers:

sassc (MSVC): 0.82s
sassc (trained): 0.68s

Find attached the standalone Windows sassc executable which I used: sassc.zip
Note that this version is partly incomplete and leaks memory, but it should pass the expected specs.
It's more or less finished functionality-wise for LibSass 4.0, but a lot of details are still missing.

@mgreter

mgreter commented Apr 27, 2020

Here is a PowerShell command to roughly estimate the bolt-bench runtime with the sassc from above:

Measure-Command {.\sassc.exe  -I bolt-bench bolt-bench\bolt-bench.scss output.css} | Select-Object -Property TotalSeconds

Feel free to add your own numbers if you happen to test it out. Here are some more numbers from my work laptop (Core i7-6820HQ), also compared to the older sassc builds I bundled with the LibSass installer (note that those are compiled with MinGW, which results in a somewhat slower executable than MSVC):

3.6.0 - ~58s
3.6.3 - ~43s
WIP - ~1.5s

@sghoweri

Here a powershell command to roughly estimate bolt-bench runtime with sassc from above:

Measure-Command {.\sassc.exe  -I bolt-bench bolt-bench\bolt-bench.scss output.css} | Select-Object -Property TotalSeconds

Feel free to add your own numbers if you happen to test it out. Here are some more numbers from my work laptop (core I7 6820HQ) also in comparison to older sassc I bundled with libsass installer (note that those are compiled with mingw which results in a bit slower executable than with MSVC):

3.6.0 - ~58s
3.6.3 - ~43s
WIP - ~1.5s

@mgreter I'm seeing even slower "before" numbers (~571.22s using v1.29.0 of Dart Sass), but even running sassc through Wine (I'm on a Mac here) I managed to compile the same benchmark in ~2 seconds. In any case, it's super exciting to see a lot of this taking shape in the v4.0 alpha! 🎉🎉
