Almost a year ago I made an introductory post about my project scrapli. That post was (I think/hope?!) fairly well received, so I figured I would make an update as I have continued to spend a ton of time on scrapli, as well as some associated projects.
You can find the original post here.
TL;DR - scrapli is still wicked fast, and all the other good stuff I mentioned before, but there is more stuff now! scrapli-cfg lets you handle config merges/replaces with scrapli even more easily, and scrapli-replay is all about helping you create meaningful tests for your projects that rely on scrapli. Finally, scrapligo has been created -- this is still fairly early, but I'm quite enjoying branching out into the world of go!
scrapli "core" updates/info:
- Big ol' documentation overhaul... where before there was a ginormous README, there are now some pretty mkdocs docs hosted on GitHub pages. There is still a lot of documentation, it's just now in a more organized, easier-on-the-eyes format.
- Added a custom built asynctelnet transport -- not useful if you are using ssh or don't care about asyncio, but I think it's pretty cool for dealing with connections over console servers and the like.
- Added a "channel log" so you can log all the input/output that you would normally see if you were typing things in yourself.
- Created an opinionated "basic logging" setup -- you can call this function and you will automagically get basic logging for scrapli turned on and a log formatter applied so you get some easy to read log output. Generally I think users should handle their own logging setup, but for quick testing/debugging I think/hope this is handy (there is a quick sketch of this, the channel log, and asynctelnet after this list).
- While the above things are cool, most scrapli related updates since the previous post have been internal and not something users would see -- there have been a myriad of improvements to the overall structure of the project, organization/improvement of tests, improvements in handling very large outputs, standardization of ancillary stuff (setup.py/cfg, makefiles, CI bits, etc.) across all the scrapli repos, and probably a lot more that I'm forgetting!
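To make a couple of those items concrete, here is a minimal sketch pulling together the asynctelnet transport, the channel log, and the basic logging helper (host/credentials are placeholders, and of course check the docs for the exact options available on your release):

```python
import asyncio

from scrapli.driver.core import AsyncIOSXEDriver
from scrapli.logging import enable_basic_logging

# the opinionated "basic logging" setup -- nice for quick debugging
enable_basic_logging(file=True, level="debug")

device = {
    "host": "172.18.0.11",  # placeholder device
    "auth_username": "vrnetlab",
    "auth_password": "VR-netlab9",
    "port": 23,
    "transport": "asynctelnet",  # the custom asyncio telnet transport
    "channel_log": True,  # record all channel input/output to a file
}


async def main():
    async with AsyncIOSXEDriver(**device) as conn:
        response = await conn.send_command("show version")
        print(response.result)


asyncio.run(main())
```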
scrapli-netconf:
- Big ol' documentation overhaul -- basically same thing as scrapli "core".
- Added support for ssh2 and paramiko transports (scrapli-netconf now supports all current scrapli SSH transports; quick example below).
- As with scrapli "core" -- lots of internal improvements to generally just make things better but are not really user facing.
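For example, picking one of the newly supported transports is just a matter of passing the transport argument (device details are placeholders):

```python
from scrapli_netconf.driver import NetconfDriver

device = {
    "host": "172.18.0.12",  # placeholder device
    "auth_username": "vrnetlab",
    "auth_password": "VR-netlab9",
    "auth_strict_key": False,
    "port": 830,
    "transport": "paramiko",  # or "ssh2", "system"
}

with NetconfDriver(**device) as conn:
    # fetch the running config over netconf
    response = conn.get_config(source="running")
    print(response.result)
```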
scrapli-community:
- Better docs again... you get the idea.
- Thanks to community contributions we now have the following platforms supported:
- Aethra ATOSNT
- Edgecore ECS
- Eltex ESR
- Fortinet WLC
- HP Comware
- Huawei VRP
- Mikrotik RouterOS
- Nokia SROS
- Ruckus FastIron
- Siemens ROXII
scrapli-cfg:
- scrapli-cfg is like NAPALM, but without any of the getters (except for get_config), and without any requirements other than scrapli.
- The main point of scrapli-cfg is to handle config management (merge/replace operations) for devices without needing any third party libraries (ex: pyeapi, eznc, etc.), and entirely "in channel" (over the telnet/ssh channel). This means you can do those config operations entirely over telnet -- no netconf required, no eapi, no scp, just a telnet/ssh connection. This also means you can manage configs entirely over console servers if you need to (there is a short sketch after this list).
- In addition to the config management aspect, you can also use scrapli-cfg to fetch configs (or checkpoint files for nxos) -- there very intentionally will not be other getters, though, as that would introduce a fairly significant amount of additional work!
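Here is roughly what a config replace looks like -- a sketch based on the current docs (device details are placeholders; double check the method names against the scrapli-cfg docs for your release):

```python
from scrapli.driver.core import IOSXEDriver
from scrapli_cfg import ScrapliCfg

device = {
    "host": "172.18.0.11",  # placeholder device
    "auth_username": "vrnetlab",
    "auth_password": "VR-netlab9",
    "auth_strict_key": False,
}

with open("new_config.txt") as f:
    new_config = f.read()

with IOSXEDriver(**device) as conn:
    cfg = ScrapliCfg(conn=conn)
    cfg.prepare()
    # load the candidate config as a full replace (replace=False would merge)
    cfg.load_config(config=new_config, replace=True)
    # see what would change before committing
    diff = cfg.diff_config()
    print(diff.side_by_side_diff)
    cfg.commit_config()
    cfg.cleanup()
```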
scrapli-replay:
- scrapli-replay is all about testing -- the main component is a pytest plugin that will automagically patch and record scrapli connections that are opened during the course of a test. The recorded data is stored as a "session", and subsequent tests are patched to basically "replay" that session from the recorded data instead of actually connecting to devices. This means that you can write tests against real devices (lab or actual prod, but something you can connect to for real), record those sessions, and then store the session data in your repository. When you run your tests in your CI system (which almost certainly has no access to your network, lab or otherwise), the sessions are replayed from the stored data -- no network access needed! (Don't worry, no password data is stored in the session output.) There is a tiny example after this list.
- There is also a "collector" that allows you to collect and store the output from a set of provided commands -- this data can then be used to create a mock ssh server that looks and feels like a "real" network device (please see the scrapli-replay docs, where I wrote fairly extensively about what this actually means) that you can connect to and send commands to/get output from, but is simply a python program running an SSH server locally!
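A replay-style test can be as simple as adding the scrapli_replay marker to an otherwise "normal" test (device details are placeholders):

```python
import pytest
from scrapli.driver.core import IOSXEDriver

DEVICE = {
    "host": "172.18.0.11",  # placeholder device
    "auth_username": "vrnetlab",
    "auth_password": "VR-netlab9",
    "auth_strict_key": False,
}


@pytest.mark.scrapli_replay
def test_show_version():
    # first run records a session from the real device; subsequent runs
    # (including in CI) replay the stored session -- no network access needed
    with IOSXEDriver(**DEVICE) as conn:
        response = conn.send_command("show version")
    assert "Version" in response.result
```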
nornir-scrapli:
- Added scrapli-netconf tasks.
- Added scrapli-cfg tasks (in the current develop branch; these will be on pypi with the 2021.07.30 release).
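Usage looks like any other nornir plugin -- assuming a normal nornir setup/inventory, something like:

```python
from nornir import InitNornir
from nornir_utils.plugins.functions import print_result

from nornir_scrapli.tasks import netconf_get_config, send_command

nr = InitNornir(config_file="nornir_config.yaml")  # placeholder config

# a "normal" scrapli task
command_results = nr.run(task=send_command, command="show version")
print_result(command_results)

# one of the scrapli-netconf tasks
netconf_results = nr.run(task=netconf_get_config, source="running")
print_result(netconf_results)
```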
scrapligo:
- Not too long after the original scrapli reddit post I started writing scrapli in go as a learning exercise. I got things working, but it was messy and I never ended up publishing it. Over the past few weeks I started the scrapligo project again from scratch, and this time I've actually published it!
- scrapligo is what it sounds like... it's pretty much a port of scrapli and scrapli-netconf directly into go...
- The primary transport mechanism is the "system" transport (basically a wrapper around /bin/ssh), but it also supports the built in go crypto/ssh client (you can think of that kinda like paramiko-but-standard-library if you are more familiar with Python things).
- All (? or if not all, very nearly all) of the public methods of the python version of scrapli exist in the go version -- but of course with idiomatic go naming -- so no more "send_commands", it's now "SendCommands"...
- The public facing API is mostly the same as its python counterpart, but again, with more idiomatic go things -- so now there are "options" for the send methods, and there are "NewXYZ" functions to create connection instances, etc.
- Huuuuuuuge thanks to Roman Dodin for his help on lots of things -- from answering go noob questions that I've asked, to creating a super cool logo for scrapligo, and of course for his contributions to the project!
- This is still a young project and there is a lot of room for improvements, particularly in the testing and documentation departments (which if you know anything about me, you know I think are the most important parts!) -- I hope to invest time in improving these, though it will likely be much slower development than the Python projects as those are still my primary focus.
Links to all the things:
I'd love to hear any feedback or whatever thoughts folks have to offer (here, twitter, slack, linkedface, whatever works for you). It has been quite the journey building and maintaining these projects, and I hope some folks can find some/all of them useful!