Great idea! Also terrifying :) Given how often I accidentally commit the wrong text in vscode, I shudder to think of the damage I could do with this on my shell, hah! What safety measures are there/could there be?
I don't know how painful it would be, but I think I could adapt to a short delay of maybe 50-100ms, applied only after accepting a completion, during which the shell won't respond to Return presses. Just long enough, ideally, to make me re-read what I entered.
You know I googled this immediately after I posted it, and you're absolutely right, but a good chunk of the syntax still kind of looks like regular expressions so I don't think I was too far off!
Regular expressions and glob(7) expressions look superficially similar, but it is mostly an illusion. Both use * and ? as metacharacters, but each of those characters means something different in the two systems. Only [ and ] are used in roughly the same way, and even those have their differences. The two syntaxes share exactly one construct that behaves identically in both, and even it is spelled differently: the ? (question mark) in glob(7) corresponds exactly to the . (full stop) in regular expressions.
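To make that concrete, here's a small sketch in plain POSIX sh (`case` matches with glob(7) patterns, `grep -E` with regular expressions):

```shell
# Compare glob(7) and regex behaviour on the same inputs.
matches_glob()  { case "$1" in $2) return 0 ;; *) return 1 ;; esac; }
matches_regex() { printf '%s\n' "$1" | grep -qE "$2"; }

matches_glob  cat  'c?t'    && echo 'glob:  ? matches exactly one character'
matches_regex cat  '^c.t$'  && echo 'regex: . plays the same role'

matches_glob  cart 'c*t'    && echo 'glob:  * matches any string'
matches_regex cart '^c.*t$' && echo 'regex: needs .* for the same job'

matches_glob  cat  'c[aeiou]t' && echo 'brackets: the one near-shared syntax'
```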
To be pedantic: you're conflating the syntax of regular expressions with the (computer science) concept. Glob patterns are a kind of very restricted regular expression. Ksh extended the glob syntax so it has the full power of regular expressions in the computer science sense - though without all the extensions of modern regex engines.
“Globs do not include syntax for the Kleene star which allows multiple repetitions of the preceding part of the expression; thus they are not considered regular expressions, which can describe the full set of regular languages over any given finite alphabet.”
You're right, normal globs are not regular expressions regardless of syntax. Parent is correct too: ksh globs are regular expressions. See `shopt -s extglob` in bash and `setopt kshglob` in zsh (or `extendedglob` for zsh's native, IMO inferior, quantifier syntax).
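For the curious, a quick bash sketch of the correspondence (assumes bash with extglob enabled; zsh's kshglob behaves the same way):

```shell
#!/usr/bin/env bash
shopt -s extglob   # enable ksh-style extended globs

# The ksh operators map onto regex quantifiers:
#   ?(pat) -> (pat)?    *(pat) -> (pat)*    +(pat) -> (pat)+
#   @(a|b) -> (a|b)     !(pat) -> "anything except pat"
kmatch() { case "$2" in $1) return 0 ;; *) return 1 ;; esac; }

kmatch '+(ab)'    'ababab'   && echo '+(ab) is the Kleene plus'
kmatch '*(ab)'    ''         && echo '*(ab) is the Kleene star (matches empty)'
kmatch '!(*.txt)' 'notes.md' && echo '!(*.txt) negates a whole pattern'
```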
Yeah, I generally use `find . -name "<my pattern>"` nowadays, just so I can see all the matching files recursively first, and then, when I'm 100% sure that what I'm doing is good, I pipe that into xargs or parallel.
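In case it's useful, the shape of that workflow (the pattern here is made up; -print0/-0 keeps filenames with spaces intact, and -r stops xargs from running on empty input):

```shell
# 1. Preview the matches first:
find . -name '*.orig' -print

# 2. Once you're sure, act on them with a NUL-delimited handoff:
find . -name '*.orig' -print0 | xargs -0 -r rm --
```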
My point was that I don't feel like Unix really stops you from doing destructive scary stuff. It seems like it's perfectly happy to let you break your machine.
I mostly agree, but sometimes I wish that `rm` defaulted to "confirm before destroying", with a flag like `-y` to skip the prompt, more or less like how `apt` works on Ubuntu.
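Something like this as a sketch - `rmy` and its `-y` flag are hypothetical, not a real tool:

```shell
# apt-style rm wrapper: prompt before deleting, -y to skip the prompt.
rmy() {
    if [ "$1" = "-y" ]; then
        shift
        command rm -- "$@"
        return
    fi
    printf 'Remove %s? [y/N] ' "$*"
    read -r answer
    case "$answer" in
        [yY]) command rm -- "$@" ;;
        *)    echo 'Aborted.' ;;
    esac
}
```

(A lighter-weight option that already exists is GNU rm's `-I` flag, which prompts once before removing more than three files or before removing recursively.)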
This seems worse than remembering, because now you have to remember that your safe-by-default command is unsafe-by-default everywhere you don't usually work.
The presentation suggested in this very thread would work better for me. I generally pay pretty close attention to apt/pacman/dnf output because it is all right there in front of me.
Perhaps ironically, if those programs asked me to confirm each change individually, I would just hold Y until it went away. Needing confirmation of each item is why I glaze over.
I set up hourly borg backups instead. That resulted in me aliasing rm='rm -rf' without a worry in the world. So far I've made about ten recoveries and have never lost important data.
What is sad is that it has to rely on a library of completion specs. It shows what kind of stone-age foundations we are building on top of. The world would have been a much better place if CLIs themselves defined strict interfaces using standard data structures. Not only for autocompletion - it would also give much more accurate error checking of bash scripts. Same goes for data piped in and out of commands.
To do this, you'd have to have some sort of common runtime, and nearly all command-line utilities would have to be compiled with said runtime, so that the parameters, names, and types of parameters for said utilities could be returned, parsed, and verified as a data structure, without having to manually parse the output of some `command --help` invocation.
It would thoroughly dispense with the whole argument parsing routine in the first place, which could now be done in a standardised manner; arguments could be defined as strict types with possible values limited to a set, and best of all, they wouldn't just be a dictionary/hash-map of strings to strings—the arguments could actually be named variables with values, in the context of the called utility.
It would certainly be quite a powerful shell, and I'm sure something similar has already been done. I just can't think of an example... A Powerful shell, running on some Common Language Runtime; I wonder what it could be.
It's sad. If the world of software had better coordination and cooperation, we would be able to get so much more done. Instead we're constantly dumping and re-parsing the same data, re-implementing the same protocols and algorithms in dozens of different languages, each time with different quirks. Fixing the same bugs again and again.
I don't mind it, it's just another runtime that works on Linux and BSD. I have my neovim infected with extensions made in Node.js, Python, Lua, my weechat is infected with extensions made in Python, Perl, Node.js, and my zsh is infected with extensions with components made in Rust, C, and Python. Just make sure to pay attention to shell start times and RAM usage.
Alternatively, if you simply wish to occasionally bring Copilot into your shell, you should know that Ctrl+X Ctrl+E (on bash) / Alt+E (on fish) will open your current shell line up in $EDITOR, which you may set to Vim or Neovim.
From there, :wq will drop the text back into your command line. If you have Copilot set up in either of those, then it will also work here.
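The only setup is pointing $EDITOR somewhere, for example (bash shown; the zsh lines are the commonly used equivalent, since zsh doesn't wire this up by default):

```shell
# In ~/.bashrc:
export EDITOR=nvim    # or vim; Ctrl+X Ctrl+E is bound by default in emacs mode

# zsh needs the widget set up manually, e.g. in ~/.zshrc:
#   autoload -Uz edit-command-line
#   zle -N edit-command-line
#   bindkey '^X^E' edit-command-line
```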
I know from working on https://github.com/hiAndrewQuinn/shell-bling-ubuntu that Neovim's LazyVim setup now supports Copilot out of the box. I never had much trouble setting up the Vim plugin either. YMMV.
Fig autocomplete is really cool! With the Fig acquisition, what do you foresee happening to Fig's autocomplete offering in the short and long term? My impression is that Amazon was more interested in the other parts of your tech, like the scripting and automation capabilities.
It's pretty cool but still seems a little rough around the edges. My simple test case was to launch into the shell and see if it loaded any of my normal shell config like highlighting or auto-suggestions (based on my history), but it seems to just be its own shell entirely. It also doesn't play nicely with `zsh`: I tried `..` (a common `zsh` alias for `cd ..`), which got a `zsh:1: permission denied: ..`, and then when I ran `cd ..` it did nothing.
I'd be curious how this compares with Fish, which has autocomplete as well. It's worked fantastic for me so far. Once you have it, it's hard to go back!
Awesome in principle, but it appears that it has to take over your prompt to work. Any custom prompt (starship) is replaced, zsh-syntax-highlighting does not work anymore etc.
There is really not much code to speak of, and a quick perusal didn't yield anything sus. Initial guess is that it's not very efficient, but I really didn't look that closely.
I wish they had written inShellisense in a more efficient programming language than TypeScript.
I recall disabling bash_completion.sh on my computer some time ago due to its negative impact on the startup speed of each iTerm2 session and the delay it introduced when using the <TAB> key for autocompletion.
Before I disabled this feature, I consistently experienced delays of over 300ms between triggering autocomplete and receiving the actual results. I must admit that this was on an Intel Core i7, so I assume the performance is much better with newer processors. However, even after more than two years without bash_completion.sh, I have already committed many command line tool flags to memory so I would only consider using a tool written in a compiled programming language that can provide autocomplete in 100ms or less, potentially requiring the inclusion of hardcoded information in the binary.
This was the first thing I noticed, too. Why TypeScript? Is it: a) efficient enough, especially compared to a Bash/Zsh/PWSH alternative, that spawning a JS interpreter for each autocomplete is no biggie? b) Is TypeScript just much more efficient than I thought? Or c) is TypeScript Microsoft's hammer, and everything looks like a nail?
I tend to think that since VSCode plugins are JavaScript, all new tooling from Microsoft ends up written in TypeScript. As a non-frontend dev, though, `npm install` is an instant turnoff for me.
I agree. I was going to install and backed out when I saw `npm install` as well. I am wondering if my priors need to be updated, though (namely: JavaScript is slow, inefficient, and unsuited for anything outside of webdev).
I think JavaScript is plenty fast these days. My only problem with TypeScript/JavaScript is that the toolchain is very complex and confusing to someone on the outside. In my experience, Rust, Go, or even Python is easier to get into if you're not living it every day. Besides familiarity, I'm not sure why someone would choose TypeScript for non-web work.
It doesn't make any sense to even complain about TypeScript in the first place here. TypeScript itself is not being run when this command is invoked; the npm package for this doesn't even distribute .ts files or a type definitions file.
This is latching on to the word "TypeScript" and immediately beginning the whining.
I mean, right. But back to what I was saying: why TypeScript? The fact that you're right doesn't make my question absurd; in fact, it kind of completely validates it.
As a user of this project, TypeScript is 100% irrelevant: it's compiled to JS, and at that point it's just a Node.js process running regular JavaScript. You can go look at what's installed via `npm list -g`, find the path, and notice there's not a single TypeScript file in the build output.
I wish those aboard the TS/JS hate-train knew what they were even complaining about.
>JS and is just a nodejs process and regular javascript at that point
Yes, and that's what people are complaining about.
People use the word TypeScript in this thread because, well, it's what shows up on GitHub, but the same argument would come up if it were pure JS.
People have a problem with the fact that you are running Node.js for a terminal enhancement, not with what language it uses.
Don’t bring TypeScript into this. It is very possible to write sub 100ms procedures in TS, but an inelegant algorithm will be slow in any language, eventually.
> It is very possible to write sub 100ms procedures in TS, […]
I won’t dispute this statement since I currently lack the means to assess inshellisense. Would it be possible for you (or someone with a functional Node + NPM setup) to install inshellisense and share the actual performance figures? You could use a tool like hyperfine (https://github.com/sharkdp/hyperfine) for this purpose.
As an attempt to test this myself, I used a Docker image (version 21.1.0-bookworm from https://hub.docker.com/_/node/). The TypeScript tool installed without any issues, along with the binding, which simply adds the following line into ~/.bashrc:
However, when I initiated a new Bash session within the same Docker container to activate the updated Bash configuration, I encountered the following error:
bash: /root/.inshellisense/key-bindings.bash: line 1: syntax error near unexpected token `$'{\r''
bash: /root/.inshellisense/key-bindings.bash: line 1: `__inshellisense__() {
Due to this issue, I am unable to perform a performance test using hyperfine.
The version of Bash available in this Docker image is 5.2.15(1)-release.
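The $'{\r'' in the error message points at CRLF line endings in the generated file (consistent with the CRLF issue linked elsewhere in the thread). Here's a reproduction and workaround in case anyone hits the same wall - the demo file below is made up, but the same `sed` should fix the real ~/.inshellisense/key-bindings.bash:

```shell
# Reproduce: a function definition with CRLF endings breaks bash's parser.
f=$(mktemp)
printf '__demo__() {\r\n  echo hi\r\n}\r\n' > "$f"
bash -n "$f" 2>&1 | head -n 1    # syntax error near unexpected token `$'{\r''

# Workaround: strip the carriage returns (dos2unix does the same thing).
sed -i 's/\r$//' "$f"
bash -n "$f" && echo 'parses cleanly now'
```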
The claim should be tested by measuring the latency of a well-engineered TS-written language service provider. The language services for TypeScript itself run in far fewer than 100ms, and that is a far more dynamic and complex use case. The shell language service, on the other hand, is trivially cacheable and on the whole quite simplistic, at least compared to the full TS semantics.
Jakob Nielsen says that 0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result...
> In gaming, delay of 100ms is huge enough for anyone besides the most casual players to notice.
I agree.
I used to play a drum beat game with my daughter and son-in-law, who are both professional musicians, and they could hit beats within a 10ms window, while I struggled to get under 100ms.
But 100ms seems like a reasonable upper limit for autocomplete to me.
PS: My son-in-law is a professional drummer and can't play Rush YYZ on Rock Band (just had to include this tidbit cause it'll piss him off)
Man, we are doing a CLI autocomplete for duck's sake, and you guys want to spin up an ENTIRE FUCKIN BROWSER for it, and then argue "well, it isn't THAT slow, so what's the problem".
I'm not some extreme purist who thinks everything should be made in C and then hand-optimized in assembly, but Jesus, how wasteful can you get before the slightest amount of self-awareness starts to kick in?
And for what reason? To save a week of engineering time learning Go?
Or, spend the 10 minutes learning that Node isn’t spinning up an entire browser.
This thread is filled with a shitload of this knee jerk idiocy. If you want to promote Go, at least learn how to complain about factual things instead of writing some fiction and complaining about that. Node has legit issues you could complain about. FFS, whine better.
Beyond the not-using-TypeScript-for-CLI-tooling difference, it would be interesting to see a comparison against carapace-bin, another shell-agnostic completer: https://github.com/rsteube/carapace-bin (written in Go, which is relevant given this thread's discussion of the choice of TypeScript for inshellisense).
From a quick peek, carapace-bin supports more shells (including and powering the one I use, nushell).
Ctrl-R is okay, and it is very convenient if you have already typed the command before.
Inshellisense seems to help with knowing what subcommands, flags and file paths are available, and it even provides a small docs helper for the flags and commands.
If you haven't tried these tools, I'd encourage you to take a look. They helped me a lot, especially when I work with tools whose commands I don't know by heart or haven't used in a while.
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.
Ya, cool name, but worse privacy than gorilla, more awkward than llm.sh, and why would I ever get the Microsoft version of something I can get elsewhere without the worry that I'm about to be devoured by a corporate anglerfish?
It is withfig/autocomplete. The only difference from their runtime is that this gives results directly in the console, instead of using a graphical overlay. I believe that means you could use this in an SSH session.
* shell config is created with CRLF https://github.com/microsoft/inshellisense/issues/8
* changing directory doesn't work https://github.com/microsoft/inshellisense/issues/5