For the snap spark-client that we support in DataPlatform, I would like to change the track name (or create a new one and unlist the current one) from latest to 3.4.
We are planning to use only the edge and beta risks; I'm not sure whether that is relevant at this point, though.
Let me know if I need to provide any other information.
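For context, this is how users would install from the proposed track once it exists. The `3.4/*` channel names below are hypothetical at this point, since the track has not been created yet:

```shell
# Install spark-client from the proposed 3.4 track at the beta risk
# (the 3.4/* channels are assumed here; they do not exist yet)
snap install spark-client --channel=3.4/beta

# Or follow the edge risk for the latest development builds
snap install spark-client --channel=3.4/edge
```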
Hi, setting the default track for snaps is done using snapcraft directly with `snapcraft set-default-track`. This assumes the 3.4 track already exists (I didn’t check, sorry, it’s Friday afternoon). Then you can `snapcraft close latest/stable` and so on for all risks on latest.
Note that we really don’t recommend closing latest for snaps or charms: it makes things confusing.
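Assuming the 3.4 track has already been created on the store, the steps described above could look roughly like this (a sketch; the snap name and channel list are taken from this thread, not verified against the store):

```shell
# Make 3.4 the default track, so a plain `snap install spark-client`
# resolves to 3.4/stable instead of latest/stable
snapcraft set-default-track spark-client 3.4

# If you do decide to retire latest (not recommended, per the note above),
# close each of its risk channels individually
snapcraft close spark-client latest/stable
snapcraft close spark-client latest/candidate
snapcraft close spark-client latest/beta
snapcraft close spark-client latest/edge
```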
What’s spark-client’s release cadence, how often is a new major version (potentially requiring a new track) released? Is this documented somewhere by upstream?
Is there some commitment from upstream on maintenance of old versions? e.g. is 3.3 still supported with security updates? Will it continue to be supported now that 3.4 is out, and for how long?
Are new versions backwards-incompatible? Meaning, if I were running 3.3 and tried to install 3.4, would that just work, or would I need to migrate my data/configuration, or would things break horribly?
Thanks for your input. The structure of the track management will closely follow what has already been implemented for other data platform products (e.g. charmed-mysql, charmed-kafka, charmed-postgresql). Also adding @taurus in cc, who coordinated this for DataPlatform a few months ago. I’ll also address your questions regarding this technology and the upstream project (also summarized here):
The release cadence of the upstream Spark project for point releases is generally around 6-9 months. See here for the history of previous point releases. For major releases there is no fixed cadence.
Point releases are generally supported upstream for 18 months.
New versions are generally (more on this later) backwards compatible, and the Spark project strives not to break “stable” APIs, even across major versions. However, Spark releases may include “experimental” and “alpha” APIs/features that can change across upgrades.
Sorry, I might have missed the last message. To try to answer your question, let me add some context here: Spark is middleware that is used (as a library) in big data applications to enable parallel data processing. An application may be written against a certain version, say 3.3, taken as a dependency, and it may need some changes/tests/refactoring before jumping to the latest version, 3.4. Most of the API should stay consistent between versions, but people may use experimental features (within their applications) that are subject to change, and we cannot swap the version under their feet automatically.
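To illustrate the dependency point: an application written against Spark 3.3 would typically pin the 3.3.x series rather than float to the newest release, so moving to 3.4 is a deliberate, tested step. A minimal sketch, assuming a Python application using the `pyspark` package from PyPI:

```shell
# Pin the application to the 3.3.x series so an upgrade to 3.4
# is an explicit decision rather than an automatic one
pip install 'pyspark>=3.3,<3.4'
```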
Just one last question for my understanding and planning: when we need to open a new track (presumably when Spark 3.5 is released, sometime towards the end of this year / beginning of next year), should I open a new thread or just add a comment on this one? Would there be a fast-track process?
It’s perfectly fine to add a new comment on this thread. And, indeed, for any new track request that is similar in nature and cadence to the first one, there is a simplified, fast-track process (where no voting is required), as described here.