LineageOS Changelog 28 – Fantastic Fourteen, Amazing Applications, Undeniable User-Experience

WRITTEN ON FEBRUARY 14, 2024 BY NOLEN JOHNSON (NPJOHNSON)

21 – Finally old enough to drink (at least in the US)!

Hey y’all! Welcome back!

We’re a bit ahead of schedule this year; we know you normally don’t expect to hear from us until April-ish.

This was largely thanks to some new faces around the scene, some old faces stepping up to the plate, and several newly appointed Project Directors!

With all that said, we have been working extremely hard since Android 14’s release last October to port our features to this new version of Android. Thanks to our hard work adapting to Google’s largely UI-based changes in Android 12/13, and Android 14’s dead-simple device bring-up requirements, we were able to rebase our changes onto Android 14 much more efficiently.

This let us spend some much-overdue time on our apps suite! Applications such as Aperture had their features and UX improved significantly, while many of our aging apps, such as Jelly, Dialer, Contacts, Messaging, LatinIME (Keyboard), and Calculator, got near-full redesigns that bring them into the Material You era!

…and last but not least, yet another new app landed in our apps suite! Don’t get used to it though, or maybe do, we’re not sure yet.

Now, let’s remind everyone about versioning conventions – to match AOSP’s versioning conventions, and because the subversion added no notable value to the end user, we dropped it from a branding perspective.

As Android has moved onto the quarterly maintenance release model, this release will be “LineageOS 21”, not 21.0 or 21.1 – though worry not – we are based on the latest and greatest Android 14 version, QPR1.

Additionally, to you developers out there – any repository that is not core-platform, or isn’t expected to change in quarterly maintenance releases will use branches without subversions – e.g., lineage-21 instead of lineage-21.0.

New Features!

  • Security patches from January 2023 to February 2024 have been merged to LineageOS 18.1 through 21.
  • Glimpse of Us: We now have a shining new app, Glimpse! It will become the default gallery app starting from LineageOS 21.
  • An extensive list of applications was heavily improved or redesigned:
    • Aperture: A touch of Material You, new video features, and more!
    • Calculator: Complete Material You redesign
    • Contacts: Design adjustments for Material You
    • Dialer: Large cleanups and code updates, Material You and bugfixes
    • Eleven: Some Material You design updates
    • Jelly: Refreshed interface, Material You and per-website location permissions
    • LatinIME: Material You enhancements, spacebar trackpad, fixed number row
    • Messaging: Design adjustments for Material You
  • A brand new boot animation by our awesome designer Vazguard!
  • SeedVault and Etar have both been updated to their newest respective upstream versions.
  • WebView has been updated to Chromium 120.0.6099.144.
  • We have further developed our side pop-out expanding volume panel.
  • Our Updater app should now install A/B updates much faster (thank Google!)
  • We have contributed even more changes and improvements back upstream to the FOSS Etar calendar app we integrated some time back!
  • We have contributed even more changes and improvements back upstream to the Seedvault backup app.
  • Android TV builds still ship with an ad-free Android TV launcher, unlike Google’s ad-enabled launcher – most Android TV Google Apps packages now have options to use the Google ad-enabled launcher or our ad-restricted version.
  • Our merge scripts have been largely overhauled, greatly simplifying the Android Security Bulletin merge process, as well as making supporting devices like Pixel devices that have full source releases much more streamlined.
  • Our extract utilities can now extract from OTA images and factory images directly, further simplifying monthly security updates for maintainers on devices that receive security patches regularly.
  • LLVM has been fully embraced, with builds now defaulting to using LLVM bin-utils and optionally, the LLVM integrated assembler. For those of you with older kernels, worry not, you can always opt-out.
  • A global Quick Settings light mode has been developed so that this UI element matches the device’s theme.
  • Our Setup Wizard has seen adaptation for Android 14, with improved styling, more seamless transitions, and significant amounts of legacy code being stripped out.
  • The developer-kit (e.g. Radxa Zero, Banana Pi M5, ODROID C4, Jetson X1) experience has been heavily improved, with UI elements and settings that aren’t relevant to their more restricted hardware feature set being hidden or tailored!
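
For context on the extract-utilities item above: modern A/B OTA packages ship their partition images inside a payload.bin that extraction tooling has to locate and unpack. A minimal sketch of that first step (this is illustrative, not the actual LineageOS extract utilities; file names are assumptions):

```python
import zipfile

def is_ab_ota(namelist):
    """A/B OTA packages carry their partition images inside payload.bin."""
    return "payload.bin" in namelist

def ota_contains_payload(ota_zip_path):
    """Check a downloaded OTA zip for an A/B payload (path is hypothetical)."""
    with zipfile.ZipFile(ota_zip_path) as z:
        return is_ab_ota(z.namelist())
```

Real tooling then parses payload.bin itself to pull out boot, vendor, and the other partition images maintainers extract blobs from.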

Amazing Applications!

Calculator


Our Calculator app has received a UI refresh, bringing it in sync with the rest of our app suite, as well as a few new features:

  • Code cleanup
  • Reworked UI components to look more modern
  • Added support for Material You
  • Fixed some bugs

Glimpse


We’ve been working on a new gallery app, called Glimpse, which will replace Gallery2, the AOSP default gallery app.

Thanks go to developers SebaUbuntu, luca020400, and LuK1337, who started the development together with the help of designer Vazguard.

We focused on a clean, simple and modern-looking UI, designed around Material You’s guidelines, making sure all the features that you would expect from a gallery app are there.

It’ll be available on all devices starting from LineageOS 21.

Aperture

This has been the first year for this new application and we feel it has been received well by the community. As promised, we have continued to improve it and add new features, while keeping up with Google’s changes to the CameraX library (even helping them fix some bugs found on some of our maintained devices). We’d like to also thank the community for their work on translations, especially since Aperture strings changed quite often this year.

Here’s a quick list of some of the new features and improvements since the last update:

  • Added a better dialog UI to ask the user for location permissions when needed
  • UI will now rotate to follow the device orientation
  • Added Material You support
  • Improved QR code scanner, now with support for Wi-Fi and Wi-Fi Easy Connect™ QR codes
  • Added support for Google Assistant voice actions
  • Added photo and video mirroring (flipping) options
  • Audio can be muted while recording a video
  • Better error handling, including when no camera is available
  • Added configurable volume button gestures
  • The app will now warn you if the device overheats and is now able to automatically stop recording if the device temperature is too high
  • Added an information chip on top of the viewfinder to show some useful information, like low battery or disabled microphone
  • Added some advanced video processing settings (noise reduction, sharpening, etc.)
  • You can now set the flash to torch mode in photo mode by long-pressing the flash button
  • Added support for HDR video recording

Jelly


Our browser app has received a UI refresh, bringing it in sync with the rest of our app suite, as well as a few new features:

  • Code cleanup
  • Reworked UI components to look more modern
  • Added support for Material You
  • Fixed some bugs regarding downloading files
  • Added Brave as a search engine and suggestions provider
  • Dropped Google encrypted search engine, as Google defaults to HSTS now
  • Baidu suggestion provider now uses HTTPS
  • Implemented per-website location permissions

Dialer, Messaging, and Contacts

Dialer

Since AOSP deprecated the Dialer, we have taken over the code base, performed heavy cleanups, updated it to newer standards (AndroidX), and redesigned it:

  • Code cleanup
  • Changed to using Material You design
  • Proper dark and light themes
  • Several bugfixes, specifically with number lookups and the contact list

While Messaging was also deprecated by AOSP, the Contacts app at least was not. Nonetheless, we gave both of them an overhaul, making them follow the system colors and look more integrated.

Careful Commonization

Several of our developers have worked hard on SoC-specific common kernels that device kernels can be based on, and that can be merged on a somewhat regular basis to pull in the latest features and security patches, saving maintainers additional effort.

Go check them out and consider basing your device kernels on them!

Supported SoCs right now are:

| SoC (system-on-chip) | Kernel Version | Android Version |
|---|---|---|
| Qualcomm MSM8996 | 3.18 | 11 |
| Qualcomm MSM8998/MSM8996 | 4.4 | 13 |
| Qualcomm SDM845 | 4.9 | 13 |
| Qualcomm SM8150 | 4.14 | 13 |
| Qualcomm SDM660 | 4.19 | 13 |
| Qualcomm SM8250 | 4.19 | 13 |
| Qualcomm SM8350 | 5.4 | 13 |
| Qualcomm SM8450 | 5.10 | 13 – Coming soon! |
| Qualcomm SM8550 | 5.15 | 13 |

Additionally, many legacy devices require interpolating libraries that we colloquially refer to as “shims” – these have long been device and maintainer managed, but this cycle we have decided to commonize them to make the effort easier on everyone and not duplicate effort!

You can check them out here and contribute shims that you think other devices may need, or add components to existing shims and compatibility layers, via Gerrit!

Deprecations

Overall, we feel that the 21 branch has reached feature and stability parity with 20 and is ready for initial release.

For the first time in many cycles, all devices that shipped LineageOS 19.1 were either promoted or dropped by the maintainer by the time of this blog post, so LineageOS 19.1 was retired naturally. As such, no new device submissions targeting the 19.1 branch will be able to ship builds (you can still apply and fork your work to the organization, though!).

LineageOS 18.1 builds were still not deprecated this year, as Google’s somewhat harsh requirement of BPF support in all Android 12+ devices’ kernels meant that a significant number of our legacy devices on the build roster would have died.

LineageOS 18.1 is still in a feature freeze, with each device building monthly, shortly after the Android Security Bulletin is merged for that month.

We will allow new LineageOS 18.1 submissions to be forked to the organization, but we no longer will allow newly submitted LineageOS 18.1 devices to ship.

LineageOS 21 will launch building for a decent selection of devices, with additional devices to come as they are marked as both Charter compliant and ready for builds by their maintainer.

Upgrading to LineageOS 21

To upgrade, please follow the upgrade guide for your device by clicking on it here and then on “Upgrade to a higher version of LineageOS”.

If you’re coming from an unofficial build, you need to follow the good ol’ install guide for your device, just like anyone else looking to install LineageOS for the first time. These guides can be found in the same place here by clicking on your device and then on “Installation”.

Please note that if you’re currently on an official build, you DO NOT need to wipe your device, unless your device’s wiki page specifically dictates otherwise, as is needed for some devices with massive changes, such as a repartition.

Download portal

While it has been in the making for quite a while and was already released a year ago, it’s still news as far as this blog post is concerned. Our download portal has been redesigned and has also gained a few functional improvements:

  • Dark mode
  • Downloads of additional images (shown for all devices but not used on all of them, read the instructions to know which ones you need for your device’s installation!)
  • Verifying downloaded files (see here) – even if you got a download from somewhere other than us, you can still verify it was originally signed by us and is thus untampered with

Wiki

The LineageOS Wiki has also been expanded throughout the year and now offers, in addition to the known and tested instructions for all supported devices, some improvements:

  • The device overview allows filtering for various attributes you might be interested in for a device (please note: choosing a device based only on that list still does not guarantee any device support beyond the point when you chose it)
  • The device overview now lists variants of a device and other known marketing names in a more visible way, also allowing for different device information and instructions per variant to be shown
  • The installation instructions have been paginated, giving users less chance to skip a section involuntarily

In addition to that, we’d like to take this time to remind users to follow the instructions on their device’s respective wiki page. Given the complexity introduced by AOSP changes like System-As-Root, the A/B partition scheme, Dynamic Partitions, and most recently Virtual A/B (found on the Pixel 5 and other devices launching with Android 11), the instructions many of you are used to following from memory are either no longer valid or are missing very critical steps. As of 16.0, maintainers have been expected to run through the full instructions and verify they work on their devices. The LineageOS Wiki was recently further extended, and maintainers were given significantly more options to customize their device’s specific installation, update, and upgrade instructions.

Developers, Developers, Developers

Or, in this case, maintainers, maintainers, maintainers. We want your device submissions!

If you’re a developer and would like to submit your device for official support, it’s easier than ever. Just follow the instructions here.

The above also applies to people looking to bring back devices that were at one point official but are no longer supported – seriously – even if it’s not yet completely compliant, submit it! Maybe we can help you complete it.

After you submit, you’ll generally receive feedback on your device submission within a few weeks – in most cases within a week. If it’s up to par, you’ll be invited to our communications instances, and your device will be forked to LineageOS’s official repositories.

Don’t have the knowledge to maintain a device, but want to contribute to the platform? We have lots of other things you can contribute to. For instance, our apps suite is always looking for new people to help improve them, or you can contribute to the wiki by adding more useful information & documentation. Gerrit is always open for submissions! Once you’ve contributed a few things, send an email to devrel(at)lineageos.org detailing them, and we’ll get you in the loop.

Also, if you sent a submission via Gmail over the last few months, due to infrastructural issues, some of them didn’t make it to us, so please resend them!

Generic Targets

We’ve talked about these before, but these are important, so we will cover them again.

Though we’ve had buildable generic targets since 2019, to make LineageOS more accessible to developers, and really anyone interested in giving LineageOS a try, we’ve documented how to use them in conjunction with the Android Emulator/Android Studio!

Additionally, similar targets can now be used to build GSIs in mobile, Android TV, and Android Automotive (we’ll talk more about this later) configurations, making LineageOS more accessible than ever to devices using Google’s Project Treble. We won’t be providing official builds for these targets, because the user experience varies entirely based on how well the device manufacturer complied with Treble’s requirements, but feel free to build them yourself and give them a shot!

Please note that Android 12 (and by proxy Android 13/14) diverged GSI and Emulator targets. Emulator targets reside in lineage_sdk_$arch, while GSI targets reside in lineage_gsi_$arch.

Translations

Bilingual? Trilingual? Anything-lingual?

If you think you can help translate LineageOS to a different language, jump over to our wiki and have a go! If your language is not supported natively in Android, reach out to us on Crowdin and we’ll take the necessary steps to include your language. For instance, LineageOS is the first Android custom distribution that has complete support for the Welsh (Cymraeg) language thanks to its community of translators.

Please, contribute to translations only if you are reasonably literate in the target language; poor translations waste both our time and yours.

Build roster

Added 21 devices

Device nameWikiMaintainersMoved from
ASUS Zenfone 5Z (ZS620KL)Z01Rrohanpurohit, Jackeagle, ThEMarD20
Banana Pi M5 (Tablet)m5_tabnpjohnson, stricted20
Essential PH-1matahaggertk, intervigil, npjohnson, rashed20
F(x)tec Pro¹ Xpro1xBadDaemon, bgcngm, mccreary, npjohnson, qsnc, tdm20
F(x)tec Pro¹pro1BadDaemon, bgcngm, intervigil, mccreary, npjohnson, tdm20
Fairphone 4FP4mikeioannina20
Google Pixel 2 XLtaimenchrmhoffmann, Eamo5, npjohnson, jro197920
Google Pixel 2walleyechrmhoffmann, Eamo5, npjohnson, jro197920
Google Pixel 3 XLcrosshatchrazorloves, cdesai, intervigil, mikeioannina20
Google Pixel 3bluelinerazorloves, cdesai, intervigil, mikeioannina20
Google Pixel 3a XLbonitocdesai, mikeioannina, npjohnson20
Google Pixel 3asargocdesai, mikeioannina, npjohnson20
Google Pixel 4 XLcoralcdesai, Eamo5, mikeioannina, npjohnson20
Google Pixel 4flamecdesai, Eamo5, mikeioannina, npjohnson20
Google Pixel 4a 5Gbramblealeasto, mikeioannina20
Google Pixel 4asunfishPeterCxy, cdesai, mikeioannina20
Google Pixel 5redfinaleasto, mikeioannina20
Google Pixel 5abarbetaleasto, mikeioannina20
Google Pixel 6 Proravenmikeioannina20
Google Pixel 6oriolemikeioannina20
Google Pixel 6abluejaymikeioannina20
Google Pixel 7 Procheetahmikeioannina, npjohnson20
Google Pixel 7panthermikeioannina, neelc20
Google Pixel 7alynxmikeioannina, niclimcy20
Google Pixel 8 Prohuskymikeioannina 
Google Pixel 8shibamikeioannina 
Google Pixel Foldfelixmikeioannina 
Google Pixel TablettangorproLuK1337, mikeioannina, npjohnson, neelc20
Google Pixel XLmarlinnpjohnson, electimon20
Google Pixelsailfishnpjohnson, electimon20
HardKernel ODROID-C4 (Tablet)odroidc4_tabnpjohnson, stricted20
LG G5 (International)h850aleasto, AShiningRay, npjohnson, ROMSG, x86cpu20
LG G5 (T-Mobile)h830aleasto, AShiningRay, npjohnson, ROMSG, x86cpu20
LG G5 (US Unlocked)rs988aleasto, AShiningRay, npjohnson, ROMSG, x86cpu20
LG G6 (EU Unlocked)h870aleasto, AShiningRay, npjohnson, ROMSG, x86cpu20
LG G6 (T-Mobile)h872aleasto, AShiningRay, npjohnson, ROMSG, x86cpu20
LG G6 (US Unlocked)us997aleasto, AShiningRay, npjohnson, ROMSG, x86cpu20
LG V20 (AT&T)h910aleasto, AShiningRay, npjohnson, ROMSG, xxseva44, x86cpu20
LG V20 (GSM Unlocked – DirtySanta)us996daleasto, AShiningRay, npjohnson, ROMSG, xxseva44, x86cpu20
LG V20 (GSM Unlocked)us996aleasto, AShiningRay, npjohnson, ROMSG, xxseva44, x86cpu20
LG V20 (Global)h990aleasto, AShiningRay, npjohnson, ROMSG, xxseva44, x86cpu20
LG V20 (Sprint)ls997aleasto, AShiningRay, npjohnson, ROMSG, xxseva44, x86cpu20
LG V20 (T-Mobile)h918aleasto, AShiningRay, npjohnson, ROMSG, xxseva44, x86cpu20
LG V20 (Verizon)vs995aleasto, AShiningRay, npjohnson, ROMSG, xxseva44, x86cpu20
LG V30 (Unlocked) / LG V30 (T-Mobile)joanlifehackerhansol, SGCMarkus20
Motorola edge 20 propstarnpjohnson, SGCMarkus20
Motorola edge 20berlinnpjohnson, SGCMarkus20
Motorola edge 2021berlnaSyberHexen20
Motorola edge 30dubaithemard, sb6596, Demon00020
Motorola edge s / Motorola moto g100niodianlujitao20
Motorola moto g200 5G / Motorola Edge S30xpengthemard, rogers260220
Motorola moto g32devonDhina17, mikeioannina20
Motorola moto g42hawaoDhina17, mikeioannina20
Motorola moto g52rhodeDhina17, mikeioannina20
Motorola moto g6 plusevertjro197920
Motorola moto g7 playchannelSyberHexen, deadman96385, erfanoabdi, npjohnson20
Motorola moto g7 pluslakejro1979, npjohnson20
Motorola moto g7 poweroceanSyberHexen, erfanoabdi, npjohnson20
Motorola moto g7rivererfanoabdi, npjohnson, SyberHexen20
Motorola moto x4paytonerfanoabdi, ThEMarD, electimon20
Motorola moto z2 force / Motorola moto z (2018)nasherfanoabdi, npjohnson, qsnc20
Motorola moto z3 playbeckhamjro197920
Motorola moto z3messinpjohnson20
Motorola one actiontroikaStricted, npjohnson20
Motorola one vision / Motorola p50kaneStricted, npjohnson20
Nokia 6.1 (2018)PL2npjohnson, theimpulson20
Nokia 6.1 PlusDRGnpjohnson, theimpulson20
Nubia Mini 5GTP1803ArianK16a, npjohnson20
OnePlus 11 5Gsalamibgcngm 
OnePlus 5cheeseburgertrautamaki20
OnePlus 5Tdumplingtrautamaki, qsnc20
OnePlus 6enchiladaLuK133720
OnePlus 6TfajitaEdwinMoq20
OnePlus 7 ProguacamoleLuK1337, Tortel20
OnePlus 7guacamolebshantanu-sarkar20
OnePlus 7T Prohotdogqsnc20
OnePlus 7ThotdogbLuK133720
OnePlus 8 ProinstantnoodlepLuK133720
OnePlus 8instantnoodlejabashque20
OnePlus 8TkebabLuK133720
OnePlus 9 ProlemonadepLuK1337, bgcngm, mikeioannina20
OnePlus 9lemonademikeioannina, tangalbert919, ZVNexus20
OnePlus 9Rlemonadesmikeioannina20
OnePlus 9RTmartinimikeioannina20
OnePlus NordaviciiMajorP93, KakatkarAkshay20
Radxa Zero (Tablet)radxa0_tabbgcngm, npjohnson, stricted20
Razer Phone 2auramikeioannina, npjohnson20
Razer Phonecherylmikeioannina, npjohnson20
Samsung Galaxy Tab A7 10.4 2020 (LTE)gta4lchrmhoffmann20
Samsung Galaxy Tab A7 10.4 2020 (Wi-Fi)gta4lwifichrmhoffmann20
Samsung Galaxy Tab S5e (LTE)gts4lvbgcngm, LuK133720
Samsung Galaxy Tab S5e (Wi-Fi)gts4lvwifiLuK1337, bgcngm20
Sony Xperia 1 IIpdx203hellobbn20
Sony Xperia 1 IIIpdx215hellobbn20
Sony Xperia 10 PlusmermaidLuK133720
Sony Xperia 10kirinLuK133720
Sony Xperia 5 IIpdx206kyasu, hellobbn20
Sony Xperia 5 IIIpdx214kyasu, hellobbn20
Sony Xperia XA2 PlusvoyagerLuK133720
Sony Xperia XA2 UltradiscoveryLuK133720
Sony Xperia XA2pioneerLuK1337, Stricted, cdesai20
Xiaomi Mi 5geminibgcngm, ikeramat20
Xiaomi Mi 5s PlusnatriumLuK133720
Xiaomi Mi 6sagitArianK16a20
Xiaomi Mi 8 Explorer Editionursabgcngm20
Xiaomi Mi 8 Proequuleusbgcngm20
Xiaomi Mi 8dipperinfrag20
Xiaomi Mi 9 SEgrusSebaUbuntu20
Xiaomi Mi CC 9 / Xiaomi Mi 9 Litepyxisceracz20
Xiaomi Mi CC9 Meitu Editionvela0xCAFEBABE20
Xiaomi Mi MIX 2chironmikeioannina20
Xiaomi Mi MIX 2Spolarisbgcngm20
Xiaomi Mi MIX 3perseusbgcngm, rtx4d20
Xiaomi Poco F1berylliumbgcngm, warabhishek20
Xiaomi Redmi 3S / Xiaomi Redmi 3X / Xiaomi Redmi 4 (India) / Xiaomi Redmi 4X / Xiaomi Redmi Note 5A Prime / Xiaomi Redmi Y1 PrimeMi89370xCAFEBABE20
Xiaomi Redmi 4A / Xiaomi Redmi 5A / Xiaomi Redmi Note 5A Lite / Xiaomi Redmi Y1 LiteMi89170xCAFEBABE20
Xiaomi Redmi 8 / Xiaomi Redmi 8A / Xiaomi Redmi 8A DualMi4390xCAFEBABE20

Added 20 devices

Device nameWikiMaintainersMoved from
10.or GGkardebayan 
ASUS ZenFone 8sakeZVNexus, Demon000, DD3Boh19.1
ASUS Zenfone Max Pro M1X00TDVivekachooz19.1
BQ Aquaris X ProbardockproQuallenauge, jmpfbmx18.1
BQ Aquaris XbardockQuallenauge, jmpfbmx18.1
Banana Pi M5 (Android TV)m5stricted 
Dynalink TV Box 4K (2021)wadenpjohnson, bgcngm, stricted, webgeek1234, deadman96385, trautamaki, luca020400, aleasto19.1
Fairphone 3 / Fairphone 3+FP3dk1978, teamb5819.1
Google ADT-3deadpoolnpjohnson, stricted, webgeek1234, deadman96385, trautamaki, luca020400, aleasto19.1
HardKernel ODROID-C4 (Android TV)odroidc4stricted 
Motorola one fusion+ / Motorola one fusion+ (India)liberWilliam, Hasaber819.1
Motorola one zoomparkerHasaber819.1
Nubia Play 5G / Nubia Red Magic 5G Litenx651jCyborg2017 
Nubia Red Magic 5G (Global) / Nubia Red Magic 5G (China) / Nubia Red Magic 5S (Global) / Nubia Red Magic 5S (China)nx659jDD3Boh 
Nubia Red Magic Marsnx619jCyborg2017 
Nubia Red Magicnx609jCyborg2017 
Nubia Z17nx563jBeYkeRYkt, Cyborg201719.1
Nubia Z18 Mininx611jCyborg201719.1
Nubia Z18nx606jCyborg2017 
OnePlus Nord N200dretangalbert91919.1
Radxa Zero (Android TV)radxa0bgcngm, npjohnson, stricted 
SHIFT SHIFT6mqaxolotlamartinz, joey, mikeioannina19.1
Samsung Galaxy A52 4Ga52qSimon151119.1
Samsung Galaxy A52s 5Ga52sxqSimon1511 
Samsung Galaxy A72a72qSimon151119.1
Samsung Galaxy A73 5Ga73xqSimon1511 
Samsung Galaxy F62 / Samsung Galaxy M62f62Linux4 
Samsung Galaxy M52 5Gm52xqSimon1511 
Samsung Galaxy Note 9crownltebaddar9017.1
Samsung Galaxy Note10d1Linux419.1
Samsung Galaxy Note10+ 5Gd2xLinux419.1
Samsung Galaxy Note10+d2sLinux419.1
Samsung Galaxy S10 5GbeyondxLinux419.1
Samsung Galaxy S10beyond1lteLinux419.1
Samsung Galaxy S10+beyond2lteLinux419.1
Samsung Galaxy S10ebeyond0lteLinux419.1
Samsung Galaxy S9starltebaddar9017.1
Samsung Galaxy S9+star2ltebaddar9017.1
Samsung Galaxy Tab A 8.0 (2019)gtowifilifehackerhansol 
Samsung Galaxy Tab S6 Lite (LTE)gta4xlhaggertk, Linux419.1
Samsung Galaxy Tab S6 Lite (Wi-Fi)gta4xlwifiLinux4, haggertk19.1
Sony Xperia XZ2 Compactxz2cdtrunk9019.1
Sony Xperia XZ2 Premiumauroradtrunk9019.1
Sony Xperia XZ2akaridtrunk9019.1
Sony Xperia XZ3akatsukidtrunk9019.1
Walmart onn. TV Box 4K (2021)dopindernpjohnson, bgcngm, stricted, webgeek1234, deadman96385, trautamaki, luca020400, aleasto 
Xiaomi 11 Lite 5G NE / Xiaomi 11 Lite NE 5G / Xiaomi Mi 11 LElisaItsVixano19.1
Xiaomi Mi 10T / Xiaomi Mi 10T Pro / Xiaomi Redmi K30S UltraapollonRamisky, SebaUbuntu19.1
Xiaomi Mi 10T Lite 5G / Xiaomi Mi 10i 5G / Xiaomi Redmi Note 9 Pro 5GgauguinHridaya, Lynnrin19.1
Xiaomi Mi 11 Lite 5GrenoirArianK16a19.1
Xiaomi Mi 11 PromarsFlower Sea 
Xiaomi Mi 11i / Xiaomi Redmi K40 Pro / Xiaomi Redmi K40 Pro+ / Xiaomi Mi 11X ProhaydnAdarshGrewal, erfanoabdi19.1
Xiaomi Mi 9T / Xiaomi Redmi K20 (China) / Xiaomi Redmi K20 (India)davinciArianK16a17.1
Xiaomi Mi A1tissotabhinavgupta37119.1
Xiaomi POCO F2 Pro / Xiaomi Redmi K30 ProlmiSebaUbuntu19.1
Xiaomi POCO F3 / Xiaomi Redmi K40 / Xiaomi Mi 11XaliothSahilSonar, SebaUbuntu, althafvly19.1
Xiaomi POCO M2 Pro / Xiaomi Redmi Note 9S / Xiaomi Redmi Note 9 Pro (Global) / Xiaomi Redmi Note 9 Pro (India) / Xiaomi Redmi Note 9 Pro Max / Xiaomi Redmi Note 10 Litemiatolldereference23, ItsVixano19.1
Xiaomi POCO X3 NFCsuryaShimitar, TheStrechh19.1
Xiaomi POCO X3 ProvayuSebaUbuntu19.1
Xiaomi Redmi 7 / Xiaomi Redmi Y3oncliteDhina1719.1
Xiaomi Redmi 9lancelotsurblazer 
Xiaomi Redmi Note 10 Pro / Xiaomi Redmi Note 10 Pro (India) / Xiaomi Redmi Note 10 Pro Max (India)sweetbasamaryan, danielml3 
Xiaomi Redmi Note 10S / Xiaomi Redmi Note 10S NFC / Xiaomi Redmi Note 10S Latin Americarosemarysurblazer 
Xiaomi Redmi Note 7 Provioletjashvakharia, raghavt2016.0
Xiaomi Redmi Note 9merlinxsurblazer, bengris32 
ZUK Z2 Plusz2_plusDD3Boh19.1

Added 18.1 devices

| Device name | Wiki | Maintainers | Moved from |
|---|---|---|---|
| Google Nexus 7 2013 (LTE, Repartitioned) | debx | npjohnson, surblazer, Elektroschmock, hpnightowl, ROMSG | |
| Motorola moto z | griffin | erfanoabdi, npjohnson | 17.1 |

Source: https://lineageos.org/Changelog-28/

How to Set Up Google Postmaster Tools

Updated: Jan 31, 2024, 1:03 PM
By Claire Broadley Content Manager
REVIEWED By Jared Atchison Co-owner

Do you want to set up Postmaster Tools… but you’re not sure where to start?

Postmaster Tools lets you monitor your spam complaints and domain reputation. That’s super important now that Gmail is blocking emails more aggressively.

Thankfully, Postmaster Tools is free and easy to configure. If you’ve already used a Google service like Analytics, it’ll take just a couple of minutes to set up.


Who Needs Postmaster Tools?

You should set up Postmaster Tools if you meet any of the following criteria:

1. You Regularly Send Emails to Gmail Recipients

Postmaster Tools is a service that Google provides to monitor email sent to Gmail users.

Realistically, most of your email lists are likely to include a large number of Gmail mailboxes unless you’re sending to a very specific group of people, like an internal company mailing list. (According to Techjury, Gmail had a 75.8% share of the email market in 2023.)

Keep in mind that Gmail recipients aren’t always using Gmail email addresses. People who use custom domains or Google Workspace are ‘hidden’, so it’s not always clear who’s using Gmail and who isn’t. To be on the safe side, it’s best to use it (it’s free).

2. You Send Marketing Emails (or Have a Large Website)

Postmaster Tools works best for bulk email senders, which Google defines as a domain that sends more than 5,000 emails a day.

If you’re sending email newsletters on a regular basis, having Postmaster Tools is going to help.

Likewise, if you use WooCommerce or a similar platform, you likely send a high number of transactional emails: password reset emails, receipts, and so on.

Reset password email

If you don’t send a large number of emails right now, you can still set up Postmaster Tools so you’re prepared for the time when you do.

Just note that you may see the following message:

No data to display at present. Please come back later.
Postmaster Tools requires your domain to satisfy certain conditions before data is visible for this chart.

This usually means you’re not sending enough emails for Google to be able to calculate meaningful statistics.

It’s up to you if you want to set it up anyway, or skip it until your business grows a little more.

How to Add a Domain to Postmaster Tools

Adding a domain to Postmaster Tools is simple and should take less than 10 minutes.

To get started, head to the Postmaster Tools site and log in. If you’re already using Google Analytics, sign in using the email address you use for your Analytics account.

The welcome popup will already be open. Click on Get Started to begin.

Add a domain in Postmaster Tools

Next, enter the domain name that your emails come from.

This should be the domain you use as the sender, or the ‘from email’, when you’re sending emails from your domain. It will normally be your main website.

Enter domain name in Postmaster Tools

If your domain name is already verified for another Google service, that’s all you need to do! You’ll see confirmation that your domain is set up.

Domain added to Google Postmaster Tools

If you haven’t used this domain with Google services before, you’ll need to verify it. Google will ask you to add a TXT record to your DNS.

Postmaster Tools domain verification

To complete this, head to the control panel for the company you bought your domain from. It’ll likely be your domain name registrar or your web host. If you’re using a service like Cloudflare, you’ll want to open up your DNS records there instead.

Locate the part of the control panel that handles your DNS (which might be called a DNS Zone) and add a new TXT record. Copy the record provided into the fields.

Note: Most providers will ask you to enter a Name, which isn’t shown in Google’s instructions. If your provider doesn’t fill this out by default, you can safely enter @ in the Name field.

Verify domain by adding TXT record for Google Postmaster Tools

Now save your record and wait a few minutes. Changes in Cloudflare can be near-instant, but other registrars or hosts may take longer.

After waiting for your change to take effect, switch back to Postmaster Tools and hit Verify to continue.
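
If you want to confirm the record has propagated before hitting Verify, you can check it from your own machine. A minimal sketch in Python, assuming the `dig` utility is installed and that the record Google gave you starts with the usual `google-site-verification=` token (the token value itself is unique to you):

```python
import subprocess

def lookup_txt(domain):
    """Query TXT records for a domain via the `dig` CLI (assumed installed)."""
    out = subprocess.run(
        ["dig", "+short", "TXT", domain],
        capture_output=True, text=True, check=True,
    ).stdout
    # dig prints each TXT record quoted, one per line
    return [line.strip().strip('"') for line in out.splitlines() if line.strip()]

def has_verification_token(records, token_prefix="google-site-verification="):
    """Return True if any TXT record carries a Google verification token."""
    return any(r.startswith(token_prefix) for r in records)

# Example (hypothetical domain):
# print(has_verification_token(lookup_txt("example.com")))
```

If this returns False long after you saved the record, double-check the Name field and whether your DNS provider has finished propagating.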

Verify domain in Postmaster Tools

And that’s it! Now your domain has been added to Postmaster Tools.

Verified domain in Postmaster Tools

How to Read the Charts in Google Postmaster Tools

Google is now tracking various aspects of your email deliverability. It’ll display the data in a series of charts in your account.

Here’s a quick overview of what you can see.

As I mentioned, keep in mind that the data here is only counted from Gmail accounts. It’s not a domain-wide measurement of everything you send.

Spam Rate

Your spam rate is the number of spam complaints received versus the number of emails sent each day. You should aim to keep this below 0.1%.

You can do that by making it easy for people to unsubscribe from marketing emails and by using double opt-ins rather than single opt-ins.

Example of a Postmaster Tools report for Gmail recipients

It’s normal for spam complaint rates to spike occasionally because Google measures each day in isolation.

If you’re seeing a spam rate that is consistently above 0.3%, it’s worth looking into why that’s happening. You might be sending emails to people who don’t want to receive them.
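
The arithmetic behind the chart is simple. A small sketch of the daily calculation and the 0.1%/0.3% bands discussed above (the band names are ours, not Google’s):

```python
def spam_rate(complaints, emails_sent):
    """Daily spam rate: complaints as a fraction of mail sent that day."""
    if emails_sent == 0:
        return 0.0
    return complaints / emails_sent

def classify(rate):
    """Rough bands matching the thresholds discussed above."""
    if rate < 0.001:       # below 0.1%: where you want to be
        return "ok"
    if rate <= 0.003:      # up to 0.3%: occasional spikes happen
        return "watch"
    return "investigate"   # consistently above 0.3%: find out why

# 12 complaints out of 20,000 emails sent = 0.06%, comfortably under 0.1%
rate = spam_rate(12, 20_000)
```

Because each day is measured in isolation, a single bad day can spike the rate; it’s the sustained trend that matters.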

IP Reputation

IP reputation is the trustworthiness of the IP address your emails come from. Google may mark emails as spam if your IP reputation is poor.

IP reputation in Postmaster Tool

Keep in mind that IP reputation is tied to your email marketing provider. It’s a measure of their IP as well as yours.

If you see a downward trend, check in with the platform you’re using to ask if they’re seeing the same thing.

Domain Reputation

Domain reputation is the trustworthiness of the domain name you’ve verified in Postmaster Tools. This can be factored into Google’s spam scoring, along with other measurements.

Domain reputation in Postmaster Tools

The ideal scenario is a consistent rating of High, as shown in our screenshot above.

Wait: What is IP Reputation vs Domain Reputation?

You’ll now see that Google has separate options for IP reputation and domain reputation. Here’s the difference:

  • IP reputation measures the reputation of the server that actually sends your emails out. This might be a service like Constant Contact, ConvertKit, or Drip. Other people who use the service will share the same IP, so you’re a little more vulnerable to the impact of other users’ actions.
  • Domain reputation is a measure of the emails that are sent from your domain name as a whole.

Feedback Loop

High-volume or bulk senders can activate this feature to track spam complaints in more detail. You’ll need a special email header called Feedback-ID if you want to use this. Most likely, you won’t need to look at this report.

Authentication

This chart shows you how many emails cleared security checks.

In more technical terms, it shows how many emails attempted authentication using DMARC, SPF, and DKIM vs. how many actually passed those checks.

Postmaster Tools authentication
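The percentages in this chart are simply passed-over-attempted ratios per mechanism. A quick sketch, using invented daily counts:

```python
def pass_rate(attempted, passed):
    """Share of emails that passed a check, out of those that attempted it."""
    return passed / attempted if attempted else 0.0

# Hypothetical daily counts per authentication mechanism:
checks = {"SPF": (10_000, 9_950), "DKIM": (10_000, 9_990), "DMARC": (10_000, 9_900)}
for name, (attempted, passed) in checks.items():
    print(f"{name}: {pass_rate(attempted, passed):.1%}")
```

If any of these rates sits well below 100%, that's a sign your DNS authentication records need attention.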

Encryption

This chart shows the share of your email that is sent over an encrypted TLS connection. It looks very similar to the domain reputation chart we already showed, and it should sit at 100%.

If you’re seeing a lower percentage, you may be using outdated connection details for your email provider.

Check the websites or platforms that are sending emails from your domain and update them from an SSL connection to a TLS connection.

wp mail smtp host and port settings

Delivery Errors

Last but not least, the final chart is the most useful. The Delivery Errors report will show you whether emails were rejected or temporarily delayed. A temporary delay is labeled as a TempFail in this report.

This chart is going to tell you whether Gmail is blocking your emails, and if so, why.

If you see any jumps, click on the point in the chart and the reason for the failures will be displayed below it.

Delivery errors in Postmaster Tools

Small jumps here and there are not a huge cause for concern. However, very large error rates are a definite red flag. You may have received a 550 error or a 421 error that gives you more clues as to why they’re happening.

Here are the 3 most important error messages related to blocked emails in Gmail:

421-4.7.0 unsolicited mail originating from your IP address. To protect our users from spam, mail sent from your IP address has been temporarily rate limited.

550-5.7.1 Our system has detected an unusual rate of unsolicited mail originating from your IP address. To protect our users from spam, mail sent from your IP address has been blocked.

550-5.7.26 This mail is unauthenticated, which poses a security risk to the sender and Gmail users, and has been blocked. The sender must authenticate with at least one of SPF or DKIM. For this message, DKIM checks did not pass and SPF check for example.com did not pass with ip: 192.186.0.1.

If you’re seeing these errors, check that your domain name has the correct DNS records for authenticating email. It’s also a good idea to examine your emails to ensure you have the right unsubscribe links in them.
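A useful first step when triaging these messages is the SMTP reply code itself: 4xx codes (like the 421 above) are temporary failures worth retrying, while 5xx codes (like the two 550s) are permanent blocks. Here's an illustrative Python sketch of that triage; it only inspects the leading code, not the full enhanced status:

```python
def classify_gmail_error(line):
    """Classify a Gmail SMTP rejection by its leading reply code:
    4xx replies are temporary (retry later), 5xx are permanent blocks."""
    code = line.split("-", 1)[0].split(" ", 1)[0]
    if code.startswith("4"):
        return "temporary (TempFail): rate limited, retry later"
    if code.startswith("5"):
        return "permanent: blocked, fix authentication or list hygiene"
    return "unknown"

print(classify_gmail_error("421-4.7.0 unsolicited mail originating from your IP address."))
print(classify_gmail_error("550-5.7.26 This mail is unauthenticated."))
```

For the permanent 5xx blocks, the fixes in this section (correct SPF/DKIM records, proper unsubscribe links) are the place to start.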

Note: WP Mail SMTP preserves the list-unsubscribe headers that your email provider adds. That means that your emails will have a one-click unsubscribe option at the top.

One click unsubscribe link
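For context, one-click unsubscribe is a pair of headers defined in RFC 8058: List-Unsubscribe with an HTTPS URL and List-Unsubscribe-Post set to "List-Unsubscribe=One-Click". Here's a sketch using Python's standard library; the addresses and URL are placeholders, and your sending platform normally adds these headers for you.

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "news@example.com"
msg["To"] = "reader@example.org"
msg["Subject"] = "Monthly newsletter"
# RFC 8058 one-click unsubscribe: both headers are needed for mailbox
# providers like Gmail to offer the unsubscribe option. Placeholder URL.
msg["List-Unsubscribe"] = "<https://example.com/unsubscribe?u=12345>"
msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"
msg.set_content("Hello! Here is this month's update.")

print(msg["List-Unsubscribe-Post"])  # List-Unsubscribe=One-Click
```

If you inspect a raw message from your domain and these headers are missing, that's the first thing to raise with your SMTP plugin or email provider.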

If you’re using a different SMTP plugin, make sure it’s preserving that crucial list-unsubscribe header. If it’s not there, you may want to consider switching to WP Mail SMTP for the best possible protection against spam complaints and failed emails.

Fix Your WordPress Emails Now

Next, Authenticate Emails from WordPress

Are your emails from WordPress disappearing or landing in the spam folder? You’re definitely not alone. Learn how to authenticate WordPress emails and ensure they always land in your inbox.

Ready to fix your emails? Get started today with the best WordPress SMTP plugin. If you don’t have the time to fix your emails, you can get full White Glove Setup assistance as an extra purchase, and there’s a 14-day money-back guarantee for all paid plans.

If this article helped you out, please follow us on Facebook and Twitter for more WordPress tips and tutorials.

Source :
https://wpmailsmtp.com/how-to-set-up-google-postmaster-tools/

Reflecting on the GDPR to celebrate Privacy Day 2024

26/01/2024
Emily Hancock

10 min read

This post is also available in Deutsch, Français, 日本語, and Nederlands.

Reflecting on the GDPR to celebrate Privacy Day 2024

Just in time for Data Privacy Day 2024 on January 28, the EU Commission is calling for evidence to understand how the EU’s General Data Protection Regulation (GDPR) has been functioning now that we’re nearing the 6th anniversary of the regulation coming into force.

We’re so glad they asked, because we have some thoughts. And what better way to celebrate privacy day than by discussing whether the application of the GDPR has actually done anything to improve people’s privacy?

The answer is, mostly yes, but in a couple of significant ways – no.

Overall, the GDPR is rightly seen as the global gold standard for privacy protection. It has served as a model for what data protection practices should look like globally, it enshrines data subject rights that have been copied across jurisdictions, and when it took effect, it created a standard for the kinds of privacy protections people worldwide should be able to expect and demand from the entities that handle their personal data. On balance, the GDPR has definitely moved the needle in the right direction for giving people more control over their personal data and in protecting their privacy.

In a couple of key areas, however, we believe the way the GDPR has been applied to data flowing across the Internet has done nothing for privacy and in fact may even jeopardize the protection of personal data. The first area where we see this is with respect to cross-border data transfers. Location has become a proxy for privacy in the minds of many EU data protection regulators, and we think that is the wrong result. The second area is an overly broad interpretation of what constitutes “personal data” by some regulators with respect to Internet Protocol or “IP” addresses. We contend that IP addresses should not always count as personal data, especially when the entities handling IP addresses have no ability on their own to tie those IP addresses to individuals. This is important because the ability to implement a number of industry-leading cybersecurity measures relies on the ability to do threat intelligence on Internet traffic metadata, including IP addresses.  

Location should not be a proxy for privacy

Fundamentally, good data security and privacy practices should be able to protect personal data regardless of where that processing or storage occurs. Nevertheless, the GDPR is based on the idea that legal protections should attach to personal data based on the location of the data – where it is generated, processed, or stored. Articles 44 to 49 establish the conditions that must be in place in order for data to be transferred to a jurisdiction outside the EU, with the idea that even if the data is in a different location, the privacy protections established by the GDPR should follow the data. No doubt this approach was influenced by political developments around government surveillance practices, such as the revelations in 2013 of secret documents describing the relationship between the US NSA (and its Five Eyes partners) and large Internet companies, and that intelligence agencies were scooping up data from choke points on the Internet. And once the GDPR took effect, many data regulators in the EU were of the view that as a result of the GDPR’s restrictions on cross-border data transfers, European personal data simply could not be processed in the United States in a way that would be consistent with the GDPR.

This issue came to a head in July 2020, when the European Court of Justice (CJEU), in its “Schrems II” decision[1], invalidated the EU-US Privacy Shield adequacy standard and questioned the suitability of the EU standard contractual clauses (a mechanism entities can use to ensure that GDPR protections are applied to EU personal data even if it is processed outside the EU). The ruling in some respects left data protection regulators with little room to maneuver on questions of transatlantic data flows. But while some regulators were able to view the Schrems II ruling in a way that would still allow for EU personal data to be processed in the United States, other data protection regulators saw the decision as an opportunity to double down on their view that EU personal data cannot be processed in the US consistent with the GDPR, therefore promoting the misconception that data localization should be a proxy for data protection.

In fact, we would argue that the opposite is the case. From our own experience and according to recent research[2], we know that data localization threatens an organization’s ability to achieve integrated management of cybersecurity risk and limits an entity’s ability to employ state-of-the-art cybersecurity measures that rely on cross-border data transfers to make them as effective as possible. For example, Cloudflare’s Bot Management product only increases in accuracy with continued use on the global network: it detects and blocks traffic coming from likely bots before feeding back learnings to the models backing the product. A diversity of signal and scale of data on a global platform is critical to help us continue to evolve our bot detection tools. If the Internet were fragmented – preventing data from one jurisdiction being used in another – more and more signals would be missed. We wouldn’t be able to apply learnings from bot trends in Asia to bot mitigation efforts in Europe, for example. And if the ability to identify bot traffic is hampered, so is the ability to block those harmful bots from services that process personal data.

The need for industry-leading cybersecurity measures is self-evident, and it is not as if data protection authorities don’t realize this. If you look at any enforcement action brought against an entity that suffered a data breach, you see data protection regulators insisting that the impacted entities implement ever more robust cybersecurity measures in line with the obligation GDPR Article 32 places on data controllers and processors to “develop appropriate technical and organizational measures to ensure a level of security appropriate to the risk”, “taking into account the state of the art”. In addition, data localization undermines information sharing within industry and with government agencies for cybersecurity purposes, which is generally recognized as vital to effective cybersecurity.

In this way, while the GDPR itself lays out a solid framework for securing personal data to ensure its privacy, the application of the GDPR’s cross-border data transfer provisions has twisted and contorted the purpose of the GDPR. It’s a classic example of not being able to see the forest for the trees. If the GDPR is applied in such a way as to elevate the priority of data localization over the priority of keeping data private and secure, then the protection of ordinary people’s data suffers.

Applying data transfer rules to IP addresses could lead to balkanization of the Internet

The other key way in which the application of the GDPR has been detrimental to the actual privacy of personal data is related to the way the term “personal data” has been defined in the Internet context – specifically with respect to Internet Protocol or “IP” addresses. A world where IP addresses are always treated as personal data and therefore subject to the GDPR’s data transfer rules is a world that could come perilously close to requiring a walled-off European Internet. And as noted above, this could have serious consequences for data privacy, not to mention that it likely would cut the EU off from any number of global marketplaces, information exchanges, and social media platforms.

This is a bit of a complicated argument, so let’s break it down. As most of us know, IP addresses are the addressing system for the Internet. When you send a request to a website, send an email, or communicate online in any way, IP addresses connect your request to the destination you’re trying to access. These IP addresses are the key to making sure Internet traffic gets delivered to where it needs to go. As the Internet is a global network, this means it’s entirely possible that Internet traffic – which necessarily contains IP addresses – will cross national borders. Indeed, the destination you are trying to access may well be located in a different jurisdiction altogether. That’s just the way the global Internet works. So far, so good.

But if IP addresses are considered personal data, then they are subject to data transfer restrictions under the GDPR. And with the way those provisions have been applied in recent years, some data regulators were getting perilously close to saying that IP addresses cannot transit jurisdictional boundaries if it meant the data might go to the US. The EU’s recent approval of the EU-US Data Privacy Framework established adequacy for US entities that certify to the framework, so these cross-border data transfers are not currently an issue. But if the Data Privacy Framework were to be invalidated as the EU-US Privacy Shield was in the Schrems II decision, then we could find ourselves in a place where the GDPR is applied to mean that IP addresses ostensibly linked to EU residents can’t be processed in the US, or potentially not even leave the EU.

If this were the case, then providers would have to start developing Europe-only networks to ensure IP addresses never cross jurisdictional boundaries. But how would people in the EU and US communicate if EU IP addresses can’t go to the US? Would EU citizens be restricted from accessing content stored in the US? It’s an application of the GDPR that would lead to an absurd result – one surely not intended by its drafters. And yet, in light of the Schrems II case and the way the GDPR has been applied, here we are.

A possible solution would be to consider that IP addresses are not always “personal data” subject to the GDPR. In 2016 – even before the GDPR took effect – the Court of Justice of the European Union (CJEU) established the view in Breyer v. Bundesrepublik Deutschland that even dynamic IP addresses, which change with every new connection to the Internet, constituted personal data if an entity processing the IP address could link the IP addresses to an individual. While the court’s decision did not say that dynamic IP addresses are always personal data under European data protection law, that’s exactly what EU data regulators took from the decision, without considering whether an entity actually has a way to tie the IP address to a real person[3].

The question of when an identifier qualifies as “personal data” is again before the CJEU: In April 2023, the lower EU General Court ruled in SRB v EDPS[4] that transmitted data can be considered anonymised and therefore not personal data if the data recipient does not have any additional information reasonably likely to allow it to re-identify the data subjects and has no legal means available to access such information. The appellant – the European Data Protection Supervisor (EDPS) – disagrees. The EDPS, who mainly oversees the privacy compliance of EU institutions and bodies, is appealing the decision and arguing that a unique identifier should qualify as personal data if that identifier could ever be linked to an individual, regardless of whether the entity holding the identifier actually had the means to make such a link.

If the lower court’s common-sense ruling holds, one could argue that IP addresses are not personal data when those IP addresses are processed by entities like Cloudflare, which have no means of connecting an IP address to an individual. If IP addresses are then not always personal data, then IP addresses will not always be subject to the GDPR’s rules on cross-border data transfers.

Although it may seem counterintuitive, having a standard whereby an IP address is not necessarily “personal data” would actually be a positive development for privacy. If IP addresses can flow freely across the Internet, then entities in the EU can use non-EU cybersecurity providers to help them secure their personal data. Advanced Machine Learning/predictive AI techniques that look at IP addresses to protect against DDoS attacks, prevent bots, or otherwise guard against personal data breaches will be able to draw on attack patterns and threat intelligence from around the world to the benefit of EU entities and residents. But none of these benefits can be realized in a world where IP addresses are always personal data under the GDPR and where the GDPR’s data transfer rules are interpreted to mean IP addresses linked to EU residents can never flow to the United States.

Keeping privacy in focus

On this Data Privacy Day, we urge EU policy makers to look closely at how the GDPR is working in practice, and to take note of the instances where the GDPR is applied in ways that place privacy protections above all other considerations – even appropriate security measures mandated by the GDPR’s Article 32 that take into account the state of the art of technology. When this happens, it can actually be detrimental to privacy. If taken to the extreme, this formulaic approach would not only negatively impact cybersecurity and data protection, but even put into question the functioning of the global Internet infrastructure as a whole, which depends on cross-border data flows. So what can be done to avert this?

First, we believe EU policymakers could adopt guidelines (if not legal clarification) for regulators that IP addresses should not be considered personal data when they cannot be linked by an entity to a real person. Second, policymakers should clarify that the GDPR’s application should be considered with the cybersecurity benefits of data processing in mind. Building on the GDPR’s existing recital 49, which rightly recognizes cybersecurity as a legitimate interest for processing, personal data that needs to be processed outside the EU for cybersecurity purposes should be exempted from GDPR restrictions to international data transfers. This would avoid some of the worst effects of the mindset that currently views data localization as a proxy for data privacy. Such a shift would be a truly pro-privacy application of the GDPR.

[1] Case C-311/18, Data Protection Commissioner v Facebook Ireland and Maximillian Schrems.
[2] Swire, Peter and Kennedy-Mayo, DeBrae and Bagley, Andrew and Modak, Avani and Krasser, Sven and Bausewein, Christoph, Risks to Cybersecurity from Data Localization, Organized by Techniques, Tactics, and Procedures (2023).
[3] Different decisions by the European data protection authorities, namely the Austrian DSB (December 2021), the French CNIL (February 2022) and the Italian Garante (June 2022), while analyzing the use of Google Analytics, have rejected the relative approach used by the Breyer case and considered that an IP address should always be considered as personal data. Only the decision issued by the Spanish AEPD (December 2022) followed the same interpretation of the Breyer case. In addition, see paragraphs 109 and 136 in Guidelines by Supervisory Authorities for Tele-Media Providers, DSK (2021).
[4] Single Resolution Board v EDPS, Court of Justice of the European Union, April 2023.

We protect entire corporate networks, help customers build Internet-scale applications efficiently, accelerate any website or Internet application, ward off DDoS attacks, keep hackers at bay, and can help you on your journey to Zero Trust.

Visit 1.1.1.1 from any device to get started with our free app that makes your Internet faster and safer.

To learn more about our mission to help build a better Internet, start here. If you’re looking for a new career direction, check out our open positions.

Source :
https://blog.cloudflare.com/reflecting-on-the-gdpr-to-celebrate-privacy-day-2024/

Thanksgiving 2023 security incident

01/02/2024
Matthew Prince John Graham-Cumming Grant Bourzikas

11 min read

On Thanksgiving Day, November 23, 2023, Cloudflare detected a threat actor on our self-hosted Atlassian server. Our security team immediately began an investigation, cut off the threat actor’s access, and on Sunday, November 26, we brought in CrowdStrike’s Forensic team to perform their own independent analysis.

Yesterday, CrowdStrike completed its investigation, and we are publishing this blog post to talk about the details of this security incident.

We want to emphasize to our customers that no Cloudflare customer data or systems were impacted by this event. Because of our access controls, firewall rules, and use of hard security keys enforced using our own Zero Trust tools, the threat actor’s ability to move laterally was limited. No services were implicated, and no changes were made to our global network systems or configuration. This is the promise of a Zero Trust architecture: it’s like bulkheads in a ship where a compromise in one system is limited from compromising the whole organization.

From November 14 to 17, a threat actor did reconnaissance and then accessed our internal wiki (which uses Atlassian Confluence) and our bug database (Atlassian Jira). On November 20 and 21, we saw additional access indicating they may have come back to test access to ensure they had connectivity.

They then returned on November 22 and established persistent access to our Atlassian server using ScriptRunner for Jira, gained access to our source code management system (which uses Atlassian Bitbucket), and tried, unsuccessfully, to access a console server that had access to the data center that Cloudflare had not yet put into production in São Paulo, Brazil.

They did this by using one access token and three service account credentials that had been taken, and that we failed to rotate, after the Okta compromise of October 2023. All threat actor access and connections were terminated on November 24 and CrowdStrike has confirmed that the last evidence of threat activity was on November 24 at 10:44.

(Throughout this blog post all dates and times are UTC.)

Even though we understand the operational impact of the incident to be extremely limited, we took this incident very seriously because a threat actor had used stolen credentials to get access to our Atlassian server and accessed some documentation and a limited amount of source code. Based on our collaboration with colleagues in the industry and government, we believe that this attack was performed by a nation state attacker with the goal of obtaining persistent and widespread access to Cloudflare’s global network.

“Code Red” Remediation and Hardening Effort

On November 24, after the threat actor was removed from our environment, our security team pulled in all the people they needed across the company to investigate the intrusion and ensure that the threat actor had been completely denied access to our systems, and to ensure we understood the full extent of what they accessed or tried to access.

Then, from November 27, we redirected the efforts of a large part of the Cloudflare technical staff (inside and outside the security team) to work on a single project dubbed “Code Red”. The focus was strengthening, validating, and remediating any control in our environment to ensure we are secure against future intrusion and to validate that the threat actor could not gain access to our environment. Additionally, we continued to investigate every system, account and log to make sure the threat actor did not have persistent access and that we fully understood what systems they had touched and which they had attempted to access.

CrowdStrike performed an independent assessment of the scope and extent of the threat actor’s activity, including a search for any evidence that they still persisted in our systems. CrowdStrike’s investigation provided helpful corroboration and support for our investigation, but did not bring to light any activities that we had missed. This blog post outlines in detail everything we and CrowdStrike uncovered about the activity of the threat actor.

The only production systems the threat actor could access using the stolen credentials was our Atlassian environment. Analyzing the wiki pages they accessed, bug database issues, and source code repositories, it appears they were looking for information about the architecture, security, and management of our global network; no doubt with an eye on gaining a deeper foothold. Because of that, we decided a huge effort was needed to further harden our security protocols to prevent the threat actor from being able to get that foothold had we overlooked something from our log files.

Our aim was to prevent the attacker from using the technical information about the operations of our network as a way to get back in. Even though we believed, and later confirmed, the attacker had limited access, we undertook a comprehensive effort to rotate every production credential (more than 5,000 individual credentials), physically segment test and staging systems, perform forensic triage on 4,893 systems, and reimage and reboot every machine in our global network, including all the systems the threat actor accessed and all Atlassian products (Jira, Confluence, and Bitbucket).

The threat actor also attempted to access a console server in our new, and not yet in production, data center in São Paulo. All attempts to gain access were unsuccessful. To ensure these systems are 100% secure, equipment in the Brazil data center was returned to the manufacturers. The manufacturers’ forensic teams examined all of our systems to ensure that no access or persistence was gained. Nothing was found, but we replaced the hardware anyway.

We also looked for software packages that hadn’t been updated, user accounts that might have been created, and unused active employee accounts; we went searching for secrets that might have been left in Jira tickets or source code, examined and deleted all HAR files uploaded to the wiki in case they contained tokens of any sort. Whenever in doubt, we assumed the worst and made changes to ensure anything the threat actor was able to access would no longer be in use and therefore no longer be valuable to them.

Every member of the team was encouraged to point out areas the threat actor might have touched, so we could examine log files and determine the extent of the threat actor’s access. By including such a large number of people across the company, we aimed to leave no stone unturned looking for evidence of access or changes that needed to be made to improve security.

The immediate “Code Red” effort ended on January 5, but work continues across the company around credential management, software hardening, vulnerability management, additional alerting, and more.

Attack timeline

The attack started in October with the compromise of Okta, but the threat actor only began targeting our systems using those credentials from the Okta compromise in mid-November.

The following timeline shows the major events:

October 18 – Okta compromise

We’ve written about this before but, in summary, we were (for the second time) the victim of a compromise of Okta’s systems which resulted in a threat actor gaining access to a set of credentials. These credentials were meant to all be rotated.

Unfortunately, we failed to rotate one service token and three service accounts (out of thousands) of credentials that were leaked during the Okta compromise.

One was a Moveworks service token that granted remote access into our Atlassian system. The second credential was a service account used by the SaaS-based Smartsheet application that had administrative access to our Atlassian Jira instance, the third was a Bitbucket service account used to access our source code management system, and the fourth was a credential for an AWS environment that had no access to the global network and no customer or sensitive data.

The one service token and three accounts were not rotated because it was mistakenly believed they were unused. This was incorrect, and it was how the threat actor first got into our systems and gained persistence in our Atlassian products. Note that this was in no way an error on the part of Atlassian, AWS, Moveworks or Smartsheet. These were merely credentials which we failed to rotate.

November 14 09:22:49 – threat actor starts probing

Our logs show that the threat actor started probing and performing reconnaissance of our systems beginning on November 14, looking for a way to use the credentials and what systems were accessible. They attempted to log into our Okta instance and were denied access. They attempted access to the Cloudflare Dashboard and were denied access.

Additionally, the threat actor accessed an AWS environment that is used to power the Cloudflare Apps marketplace. This environment was segmented with no access to global network or customer data. The service account to access this environment was revoked, and we validated the integrity of the environment.

November 15 16:28:38 – threat actor gains access to Atlassian services

The threat actor successfully accessed Atlassian Jira and Confluence on November 15 using the Moveworks service token to authenticate through our gateway, and then they used the Smartsheet service account to gain access to the Atlassian suite. The next day they began looking for information about the configuration and management of our global network, and accessed various Jira tickets.

The threat actor searched the wiki for things like remote access, secret, client-secret, openconnect, cloudflared, and token. They accessed 36 Jira tickets (out of a total of 2,059,357 tickets) and 202 wiki pages (out of a total of 194,100 pages).

The threat actor accessed Jira tickets about vulnerability management, secret rotation, MFA bypass, network access, and even our response to the Okta incident itself.

The wiki searches and pages accessed suggest the threat actor was very interested in all aspects of access to our systems: password resets, remote access, configuration, our use of Salt, but they did not target customer data or customer configurations.

November 16 14:36:37 – threat actor creates an Atlassian user account

The threat actor used the Smartsheet credential to create an Atlassian account that looked like a normal Cloudflare user. They added this user to a number of groups within Atlassian so that they’d have persistent access to the Atlassian environment should the Smartsheet service account be removed.

November 17 14:33:52 to November 20 09:26:53 – threat actor takes a break from accessing Cloudflare systems

During this period, the attacker took a break from accessing our systems (apart from apparently briefly testing that they still had access) and returned just before Thanksgiving.

November 22 14:18:22 – threat actor gains persistence

Since the Smartsheet service account had administrative access to Atlassian Jira, the threat actor was able to install the Sliver Adversary Emulation Framework, a widely used tool that red teams and attackers use to enable “C2” (command and control) connectivity and gain persistent, stealthy access to the computer on which it is installed. Sliver was installed using the ScriptRunner for Jira plugin.

This allowed them continuous access to the Atlassian server, and they used this to attempt lateral movement. With this access the Threat Actor attempted to gain access to a non-production console server in our São Paulo, Brazil data center due to a non-enforced ACL. The access was denied, and they were not able to access any of the global network.

Over the next day, the threat actor viewed 120 code repositories (out of a total of 11,904 repositories). Of the 120, the threat actor used the Atlassian Bitbucket git archive feature on 76 repositories to download them to the Atlassian server, and even though we were not able to confirm whether or not they had been exfiltrated, we decided to treat them as having been exfiltrated.

The 76 source code repositories were almost all related to how backups work, how the global network is configured and managed, how identity works at Cloudflare, remote access, and our use of Terraform and Kubernetes. A small number of the repositories contained encrypted secrets which were rotated immediately even though they were strongly encrypted themselves.

We focused particularly on these 76 source code repositories to look for embedded secrets (secrets stored in the code were rotated), vulnerabilities, and ways in which an attacker could use them to mount a subsequent attack. This work was done as a priority by engineering teams across the company as part of “Code Red”.

As a SaaS company, we’ve long believed that our source code itself is not as precious as the source code of software companies that distribute software to end users. In fact, we’ve open sourced a large amount of our source code and speak openly through our blog about algorithms and techniques we use. So our focus was not on someone having access to the source code, but whether that source code contained embedded secrets (such as a key or token) and vulnerabilities.

November 23 – Discovery and threat actor access termination begins

Our security team was alerted to the threat actor’s presence at 16:00 and deactivated the Smartsheet service account 35 minutes later. 48 minutes later the user account created by the threat actor was found and deactivated. Here’s the detailed timeline for the major actions taken to block the threat actor once the first alert was raised.

15:58 – The threat actor adds the Smartsheet service account to an administrator group.
16:00 – Automated alert about the change at 15:58 to our security team.
16:12 – Cloudflare SOC starts investigating the alert.
16:35 – Smartsheet service account deactivated by Cloudflare SOC.
17:23 – The threat actor-created Atlassian user account is found and deactivated.
17:43 – Internal Cloudflare incident declared.
21:31 – Firewall rules put in place to block the threat actor’s known IP addresses.

November 24 – Sliver removed; all threat actor access terminated

10:44 – Last known threat actor activity.
11:59 – Sliver removed.

Throughout this timeline, the threat actor tried to access a myriad of other systems at Cloudflare but failed because of our access controls, firewall rules, and use of hard security keys enforced using our own Zero Trust tools.

To be clear, we saw no evidence whatsoever that the threat actor got access to our global network, data centers, SSL keys, customer databases or configuration information, Cloudflare Workers deployed by us or customers, AI models, network infrastructure, or any of our datastores like Workers KV, R2 or Quicksilver. Their access was limited to the Atlassian suite and the server on which our Atlassian runs.

A large part of our “Code Red” effort was understanding what the threat actor got access to and what they tried to access. By looking at logging across systems we were able to track attempted access to our internal metrics, network configuration, build system, alerting systems, and release management system. Based on our review, none of their attempts to access these systems were successful. Independently, CrowdStrike performed an assessment of the scope and extent of the threat actor’s activity, which did not bring to light activities that we had missed and concluded that the last evidence of threat activity was on November 24 at 10:44.

We are confident that between our investigation and CrowdStrike’s, we fully understand the threat actor’s actions and that they were limited to the systems on which we saw their activity.

Conclusion

This was a security incident involving a sophisticated actor, likely a nation-state, who operated in a thoughtful and methodical manner. The efforts we have taken ensured that the ongoing impact of the incident was limited and that we are well prepared to fend off any sophisticated attacks in the future. This required the efforts of a significant number of Cloudflare’s engineering staff, and, for over a month, this was the highest priority at Cloudflare. The entire Cloudflare team worked to ensure that our systems were secure and the threat actor’s access was understood, to remediate immediate priorities (such as mass credential rotation), and to build a plan of long-running work to improve our overall security based on areas for improvement discovered during this process.

We are incredibly grateful to everyone at Cloudflare who responded quickly over the Thanksgiving holiday to conduct an initial analysis and lock out the threat actor, and all those who contributed to this effort. It would be impossible to name everyone involved, but their long hours and dedicated work made it possible to undertake an essential review and change of Cloudflare’s security while keeping our global network running and our customers’ service running.

We are grateful to CrowdStrike for having been available immediately to conduct an independent assessment. Now that their final report is complete, we are confident in our internal analysis and remediation of the intrusion and are making this blog post available.

IOCs
Below are the Indicators of Compromise (IOCs) that we saw from this threat actor. We are publishing them so that other organizations, and especially those that may have been impacted by the Okta breach, can search their logs to confirm the same threat actor did not access their systems.

Indicator | Indicator Type | SHA256 | Description
193.142.58[.]126 | IPv4 | N/A | Primary threat actor infrastructure, owned by M247 Europe SRL (Bucharest, Romania)
198.244.174[.]214 | IPv4 | N/A | Sliver C2 server, owned by OVH SAS (London, England)
idowall[.]com | Domain | N/A | Infrastructure serving Sliver payload
jvm-agent | Filename | bdd1a085d651082ad567b03e5186d1d46d822bb7794157ab8cce95d850a3caaf | Sliver payload
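As a rough illustration of the log search the IOCs enable, here is a minimal Python sketch. The log lines are hypothetical, and the de-fanged `[.]` notation from the table is replaced with real dots so the patterns match raw logs:

```python
import re

# Network IOCs published by Cloudflare ("[.]" de-fanging undone)
IOC_PATTERNS = [
    re.escape("193.142.58.126"),   # primary threat actor infrastructure
    re.escape("198.244.174.214"),  # Sliver C2 server
    re.escape("idowall.com"),      # domain serving the Sliver payload
]
IOC_RE = re.compile("|".join(IOC_PATTERNS))

def find_ioc_hits(lines):
    """Return (line_number, line) pairs that mention any known IOC."""
    return [(n, line) for n, line in enumerate(lines, 1) if IOC_RE.search(line)]

# Hypothetical example log lines
logs = [
    "2023-11-22 14:18:22 accepted connection from 10.0.0.5",
    "2023-11-22 14:19:01 outbound request to 198.244.174.214:443",
]
print(find_ioc_hits(logs))  # only the second line matches (the Sliver C2 address)
```

In practice you would feed this the relevant proxy, DNS, and firewall logs rather than an in-memory list.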

We protect entire corporate networks, help customers build Internet-scale applications efficiently, accelerate any website or Internet application, ward off DDoS attacks, keep hackers at bay, and can help you on your journey to Zero Trust.

Visit 1.1.1.1 from any device to get started with our free app that makes your Internet faster and safer.

To learn more about our mission to help build a better Internet, start here. If you’re looking for a new career direction, check out our open positions.

Source :
https://blog.cloudflare.com/thanksgiving-2023-security-incident

AnyDesk says hackers breached its production servers, reset passwords

By Lawrence Abrams
February 2, 2024

AnyDesk confirmed today that it suffered a recent cyberattack that allowed hackers to gain access to the company’s production systems. BleepingComputer has learned that source code and private code signing keys were stolen during the attack.

AnyDesk is a remote access solution that allows users to remotely access computers over a network or the internet. The program is very popular with enterprises, which use it for remote support or to access colocated servers.

The software is also popular among threat actors who use it for persistent access to breached devices and networks.

The company reports having 170,000 customers, including 7-Eleven, Comcast, Samsung, MIT, NVIDIA, SIEMENS, and the United Nations.

AnyDesk hacked

In a statement shared with BleepingComputer late Friday afternoon, AnyDesk says they first learned of the attack after detecting indications of an incident on their production servers. 

After conducting a security audit, they determined their systems were compromised and activated a response plan with the help of cybersecurity firm CrowdStrike.

AnyDesk did not share details on whether data was stolen during the attack. However, BleepingComputer has learned that the threat actors stole source code and code signing certificates.

The company also confirmed ransomware was not involved but didn’t share too much information about the attack other than saying their servers were breached, with the advisory mainly focusing on how they responded to the incident.

As part of their response, AnyDesk says they have revoked security-related certificates and remediated or replaced systems as necessary. They also reassured customers that AnyDesk was safe to use and that there was no evidence of end-user devices being affected by the incident.

“We can confirm that the situation is under control and it is safe to use AnyDesk. Please ensure that you are using the latest version, with the new code signing certificate,” AnyDesk said in a public statement.

While the company says that no authentication tokens were stolen, out of caution, AnyDesk is revoking all passwords to their web portal and suggests changing the password if it’s used on other sites.

“AnyDesk is designed in a way which session authentication tokens cannot be stolen. They only exist on the end user’s device and are associated with the device fingerprint. These tokens never touch our systems,” AnyDesk told BleepingComputer in response to our questions about the attack.

“We have no indication of session hijacking as to our knowledge this is not possible.”

The company has already begun replacing stolen code signing certificates, with Günter Born of BornCity first reporting that they are using a new certificate in AnyDesk version 8.0.8, released on January 29th. The only listed change in the new version is that the company switched to a new code signing certificate and will revoke the old one soon.

BleepingComputer looked at previous versions of the software, and the older executables were signed under the name ‘philandro Software GmbH’ with serial number 0dbf152deaf0b981a8a938d53f769db8. The new version is now signed under ‘AnyDesk Software GmbH,’ with a serial number of 0a8177fcd8936a91b5e0eddf995b0ba5, as shown below.

Signed AnyDesk 8.0.6 (left) vs AnyDesk 8.0.8 (right)
Source: BleepingComputer

Certificates are usually not invalidated unless they have been compromised, such as being stolen in attacks or publicly exposed.

While AnyDesk had not shared when the breach occurred, Born reported that AnyDesk suffered a four-day outage starting on January 29th, during which the company disabled the ability to log in to the AnyDesk client.

“my.anydesk II is currently undergoing maintenance, which is expected to last for the next 48 hours or less,” reads the AnyDesk status message page.

“You can still access and use your account normally. Logging in to the AnyDesk client will be restored once the maintenance is complete.”

Yesterday, access was restored, allowing users to log in to their accounts, but AnyDesk did not provide any reason for the maintenance in the status updates.

However, AnyDesk has confirmed to BleepingComputer that this maintenance is related to the cybersecurity incident.

It is strongly recommended that all users switch to the new version of the software, as the old code signing certificate will soon be revoked.

Furthermore, while AnyDesk says that passwords were not stolen in the attack, the threat actors did gain access to production systems, so it is strongly advised that all AnyDesk users change their passwords. Additionally, if they use their AnyDesk password on other sites, it should be changed there as well.

Every week, it feels like we learn of a new breach against well-known companies.

Last night, Cloudflare disclosed that they were hacked on Thanksgiving using authentication keys stolen during last year’s Okta cyberattack.

Last week, Microsoft also revealed that they were hacked by Russian state-sponsored hackers named Midnight Blizzard, who also attacked HPE in May.


Source :
https://www.bleepingcomputer.com/news/security/anydesk-says-hackers-breached-its-production-servers-reset-passwords/

Does the WiFi channel matter? A guide to which channel you should choose.

SEPTEMBER 20, 2022 BY MARK B

When having trouble getting good performance from your wireless router or access point, the first setting that people usually change is the WiFi channel. And it makes sense, considering that the channel may be just a bit ‘too crowded’, so change the number, save and the WiFi speed should come back to life, right?

It is possible to see an increase in throughput, but you should never change the settings blindly, hoping that something may stick. I admit that I was guilty of doing just that some time ago, but the concept behind WiFi channels doesn’t need to be mystifying. So let’s have a look at what they are, their relationship with the channel bandwidth and which settings are suitable for your network.


What is a WiFi channel?

I am sure that most of you are familiar with the 2.4GHz and the 5GHz radio bands, but you need to understand that they’re not fixed frequency points; instead, they’re more like a spectrum of frequencies. The 2.4GHz band has a range of frequencies from 2,402MHz to 2,483MHz and, when you tune to a specific frequency within this spectrum, you are essentially selecting a WiFi channel for your data transmission.

2.4GHz Channels – 20MHz channel bandwidth.

For example, channel 1 is associated with 2,412MHz (the range is between 2,401 and 2,423MHz), channel 2 is 2,417MHz (2,406 to 2,428MHz range), channel 7 is 2,442MHz (2,431 to 2,453MHz range) and channel 14 is 2,484MHz (2,473 to 2,495MHz range). As you can see, there is some overlapping in the frequency range between certain channels, but we’ll talk more about that in a minute. The 5GHz radio band spans between 5,035MHz and 5,980MHz.

This means that the channel 36 is associated with the 5,180MHz (the range between 5,170 and 5,190MHz), the channel 40 is 5,200MHz (between 5,190 and 5,210MHz) and channel 44 can be associated with the 5,220MHz frequency (the range between 5,210 and 5,230MHz). Now, let’s talk about overlapping and non-overlapping channels.
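The channel-to-center-frequency mapping described above follows a simple linear rule, which can be sketched as follows (a minimal helper; per-country regulatory restrictions on which channels are allowed are not modeled):

```python
def channel_center_mhz(channel, band="2.4"):
    """Center frequency in MHz for a WiFi channel number."""
    if band == "2.4":
        if channel == 14:            # channel 14 is a special case (Japan, 802.11b only)
            return 2484
        return 2407 + 5 * channel    # channels 1-13: 5MHz spacing from 2,407MHz
    if band == "5":
        return 5000 + 5 * channel    # e.g. channel 36 -> 5,180MHz
    raise ValueError("unknown band")

print(channel_center_mhz(1))        # 2412
print(channel_center_mhz(36, "5"))  # 5180
```

The same rule reproduces every example in the text: channel 7 maps to 2,442MHz and channel 44 to 5,220MHz.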

Overlapping vs non-overlapping channels

If you have a look at the channel representation that I put together for the 2.4GHz frequency band using the 20MHz WiFi channel bandwidth, you can see that three channels stand out from the others. Channels 1, 6 and 11 are non-overlapping, and you can see from the graph that if your APs use these channels, they’re far less prone to interference.

5GHz – Channel allocation.

To get an even better idea, have a look at the graph representing the 5GHz channels and the way they’re grouped to create a larger channel bandwidth. We talked about the two main types of interference, co-channel and adjacent-channel interference, when we analyzed the best channel bandwidth to use for the 5GHz band. The idea is that when using the same channel, the devices are forced to take turns, slowing down the network.

But it’s also possible that adjacent channels may bleed into each other, adding noise to the data and rendering the WiFi connection unusable. That’s why most people suggest keeping a narrower channel bandwidth and using non-overlapping channels if there are lots of APs in the area (which are not properly adjusted by a system admin).
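The non-overlapping rule for 2.4GHz channels can be checked mechanically. The sketch below assumes the ~22MHz occupied width of legacy 2.4GHz channels, which is what reproduces the classic “1, 6 and 11” rule:

```python
def center_mhz_24(channel):
    """Center frequency of a 2.4GHz channel (channel 14 is a special case)."""
    return 2484 if channel == 14 else 2407 + 5 * channel

def channels_overlap(ch_a, ch_b, width_mhz=22):
    """True if the two channels' spectra overlap, assuming each channel
    occupies ~22MHz (the legacy 2.4GHz width)."""
    return abs(center_mhz_24(ch_a) - center_mhz_24(ch_b)) < width_mhz

print(channels_overlap(1, 6))   # False: safe to use side by side
print(channels_overlap(1, 3))   # True: adjacent-channel interference
```

Channels 1, 6 and 11 sit 25MHz apart, which is why no pair of them overlaps under this model.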

Changing the channel, but not the channel bandwidth

We already know that changing the channel bandwidth will have a significant impact on the WiFi performance, because 20MHz or 40MHz will deliver a far more stable (although not as high) throughput on the 5GHz frequency band in a crowded environment.

Multiple wireless access points.

But what happens when we change the WiFi channel while keeping the same channel bandwidth? Again, it depends on whether you’re switching from overlapping to non-overlapping channels, because by doing so you may see a noticeable increase in performance (just keep an eye on the available channels, because the wider the channel bandwidth, the fewer non-overlapping channels will be available for you to use). In the ideal scenario, where there is no interference, moving from one channel to another within the same bandwidth shouldn’t really make much of a difference in terms of data transfer rate.

Auto or manual WiFi channel selection?

Wireless routers and access points usually have the WiFi channel selection set to auto, which is why you may see your neighbors’ channels change annoyingly often. Every time they restart the router/AP or there’s a power outage, the channel may change to whichever is the least crowded available.

Abundance of Wireless Access Points.

If you choose yours manually, you will have to keep up with the changes to neighboring WiFi networks, which is why it’s a good idea to keep the WiFi channel on your AP on auto as well. If we’re talking about an office or a large enterprise network, it’s obviously better to have full control over how the network behaves, so manual selection is better.

When should you use DFS channels?

DFS stands for Dynamic Frequency Selection, and it refers to those frequencies that are usually reserved for military use or for radars (such as weather or airport equipment), which means that they can differ from country to country. So make sure to check whether you’re allowed to use certain channels (especially if you got the wireless router or AP from abroad) before you get a knock on your door. Also, it’s pretty obvious that you won’t be able to use these channels if you live near an airport.

Engenius EWS850AP access point.

That being said, the main benefit of using DFS channels is that you are no longer impacted by interference from your neighbors’ WiFi. But do be aware that, depending on the router, there is a high chance that if it detects a nearby radar using the same frequency, it will automatically switch to another WiFi channel.

Also, there is another problem that I have often encountered. Not that many client devices will actually connect to a WiFi network that uses DFS channels, so you may find out that while your PC and smartphone continue to have access to the Internet, pretty much every other smart or IoT device will drop the connection.

Source :
https://www.mbreviews.com/does-the-wifi-channel-matter/

Do WiFi 6 routers have better range?

OCTOBER 15, 2022 BY MARK B

I do get the question of whether WiFi 6 routers have better range from time to time, and my answer is that some do have better range than a WiFi 5 router, while some don’t. It’s only normal that an expensive new piece of technology will behave better than an old, battle-scarred router. But, in general, are WiFi 6 routers able to cover more space than devices from the older WiFi generation?

Especially since we are promised that the OFDMA will just make everything way better, so just go and buy the new stuff, throw away the old! The idea behind the WiFi 6 standard (IEEE 802.11ax) was not really about speed or increased coverage, it was about handling a denser network, with a lot of very diverse client devices in an environment prone to lots of interference.

Abundance of Wireless Access Points.

As a consequence, you may see some benefits in regard to coverage and throughput, despite not really being the main aim. It’s clear that those that stand to get the most benefit are SMBs and especially the enterprise market, so why do Asus, Netgear, TP-Link and other home-network-based manufacturers keep on pushing WiFi 6 routers forward? The tempting response is money, which is true, but only partially.

We have started to get denser networks even in our homes (smart and IoT devices), and living in a city means your neighbors will also add to those denser networks, so WiFi 6 could make sense, right? With the correct client devices, yes, and you may also see better range. So, let’s take a slightly deeper dive into the subject and understand whether WiFi 6 routers have better range in real-life conditions.


What determines the range of a router?

The main factors that determine the range of a router are the transmit power, the antenna gain and the interference in the area the signal needs to travel through. The SoC will also play an important role in the WiFi performance of the router.

1. The Transmit Power

I have covered this topic a bit in a separate article, where I discussed whether the user should adjust the transmit power of their access point or leave the default values. The conclusion was that the default values are usually wrong and yes, you should adjust them so as to get a more efficient network, even if it may seem that the coverage will suffer. But before that, know that there are legal limitations on transmit power.

The FCC says that the maximum transmitter output power that goes towards the antenna can go up to 1 Watt (30dBm), but the EIRP caps the overall limit at 36dBm. The EIRP is the sum of the maximum output power that goes towards the antenna and the antenna gain.

Mikrotik Netmetal AC2 – free to add whichever antennas you like.

This means that the manufacturer is free to try different combinations of power output and antenna gain to better reach the client devices, while keeping that limit in mind.
This factor has not changed from the previous WiFi standard, so WiFi 6 has the same limit in place as WiFi 5 (and the previous wireless standards). The advice is still to lower the transmit power as much as possible for the 2.4GHz radio and to increase it to the maximum for the 5GHz radio. That’s because the former radiates a lot better through objects, while the latter does not, but provides far better speeds.
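The EIRP arithmetic above is simple enough to sketch. The helper names are mine, and the 30dBm output and 36dBm EIRP figures are the limits cited in the text (the FCC's actual rules have more nuances per band and antenna configuration):

```python
def eirp_dbm(tx_power_dbm, antenna_gain_dbi):
    """EIRP is simply the transmitter output plus the antenna gain (both in dB scale)."""
    return tx_power_dbm + antenna_gain_dbi

def dbm_to_watts(dbm):
    """Convert dBm to Watts: 30dBm is exactly 1 Watt."""
    return 10 ** ((dbm - 30) / 10)

def within_cited_limits(tx_power_dbm, antenna_gain_dbi):
    """Check against the limits cited in the text: 30dBm at the antenna, 36dBm EIRP."""
    return tx_power_dbm <= 30 and eirp_dbm(tx_power_dbm, antenna_gain_dbi) <= 36

print(dbm_to_watts(30))                 # 1.0, i.e. the 1 Watt limit
print(within_cited_limits(30, 6))       # True: 30dBm + 6dBi = 36dBm EIRP
print(within_cited_limits(30, 9))       # False: 39dBm EIRP exceeds the cap
```

This is why a manufacturer can trade output power against antenna gain: 24dBm with a 12dBi antenna hits the same 36dBm EIRP as 30dBm with 6dBi.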

2. The Antenna Gain

This ties in nicely with the previous section since, just like the output power, the antenna gain needs to be adjusted by the manufacturer within the limits dictated by the FCC. And there is an interesting thing that I noticed with the newer WiFi 6 routers, something that was not common with the previous generation: the antennas can’t be removed on most routers, only on the most expensive models.

This means that in most cases, you can’t upgrade the antennas to potentially get better range. Before, you could take an older router, push the transmit power to the maximum (you could even push it past its stock limits with DD-WRT or some other third-party firmware) and then add some high-gain antennas.

Old TP-Link router.

This way, the range could have been better, but could you actually go past the allowed limit? The chipset inside the router most likely kept everything within the allowed limit, but you could still get closer to that limit. Would you see any benefit though? That’s another story, because years ago, when there were far fewer wireless devices around, pushing everything to the maximum made sense due to the lower amount of interference.


Nowadays, you’re just going to annoy your neighbors, while also making a mess of your WiFi clients’ connections. Sure, you will connect to a faraway client device, but will it be able to transfer data at a good speed? I doubt it, so it will just hog the entire network. The WiFi 6 standard does help alleviate this problem a bit, but we’ll talk more about that in a minute.

3. The WiFi Interference

This factor comes in different flavors. It can be from other devices that use the same channel, other access points that broadcast the signal through your house over the same channels or it can even be from your microwave. Ideally, you want to keep your WiFi inside your home, so that it doesn’t interfere with the WiFi signal from other routers or dedicated access points. Which is why the 5GHz radio has become the default option for connecting smartphones, laptops, TVs or PCs, while the 2.4GHz is usually left for the IoT devices.

Interesting antenna patterns to limit interference. Left: Zyxel WAX630S. Right: Zyxel WAX650S.

At least this has been true for WiFi 5 routers, because WiFi 6 routers can use OFDMA on the 2.4GHz band and push the throughput to spectacular levels (where it would actually be if there were little to no interference; it’s not an actual boost in speed). For example, the Asus RT-AX86U can reach up to 310Mbps at 5 feet (40MHz channel bandwidth), but very few routers implement it on both radios due to cost constraints.

For example, the Ubiquiti U6-LR only uses OFDMA on the 5GHz radio band, further showing the tendency to leave the 2.4GHz band to the IoT devices. Now let’s talk about walls. There are two main behaviors that you need to keep in mind. First, there’s the obstacle aspect, which is obvious: when you move your client device into another room from your router, the signal drops a bit. Moving it farther will add more attenuation and the speed will drop even more.

For example, I have an office that’s split into two by a very thick wall so, on paper, one router positioned in the middle should suffice for both sides, right? Not quite because this wall is very thick and made of concrete, so it works as a phenomenal signal blocker.

Asus AiMesh.

That’s why I needed two routers in the middle of the office to cover both sides effectively. The other aspect is signal reflection. What this means is that if you broadcast the signal in the open, it will reach, let’s say, up to 70 feet, but if you broadcast it down a long hallway, you can get a great signal at the end of the hallway (it could be double the distance compared to the open field). But this also means that you may see some very weird, inconsistent coverage with your client devices.

What about the client devices?

This is a very important factor that is often overlooked when people talk about WiFi range, and it’s incredibly important to understand the role of the network adapter, especially in regard to WiFi 6 client devices. First of all, understand that not all client devices are the same: some have a great receiver which can see the WiFi signal from very far away, others are very shy and want to be closer to the router. Then there’s compatibility with specific features.

MU-MIMO, Beamforming and now OFDMA have become standard with newer routers but, if the wireless client devices don’t support these features, it doesn’t really matter whether they’re implemented or not. And this is one of the reasons why you may have noticed (even in my router tests) that a WiFi 5 client will most likely yield similar results whether it’s connected to a WiFi 5 router or to a WiFi 6 router.
So, if you want to see improvements when using WiFi 6 routers, make sure that you have compatible adapters installed in your main client devices. Otherwise, there is no actual point in upgrading.

WiFi 6 adapter.

How can OFDMA improve range?

Yes, yes, I know OFDMA was not designed to improve the speed or the range of the network, but even so, the consequences of its optimizations are exactly these: better throughput and a perceived far better range. Orthogonal Frequency-Division Multiple Access breaks the channel into smaller subcarriers and assigns them to individual clients.

So, while before, one client would start transmitting and every other client device had to wait until it was done, now it’s possible to get multiple simultaneous data transmissions, greatly improving the efficiency of the network and significantly lowering the latency (which is excellent news for online gaming). I have talked about how a far-away client device can hog the network when I analyzed the best settings for the transmit power: that was because it would connect to the AP or router and transmit at a very low data rate.

Using OFDMA in this type of scenario improves network behavior: even if the range itself isn’t changed, with today’s much denser networks you’ll get more efficient behavior for both close and far-away client devices. So yes, better perceived range and more speed.
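To make the “hogging” argument concrete, here is a deliberately simplified toy model. It ignores contention, scheduling and PHY overhead entirely, and the client names and rates are made up; it only illustrates why parallel subcarrier allocation helps the fast client:

```python
def sequential_completion_times(clients):
    """Pre-OFDMA picture: one client transmits at a time while the rest wait,
    so a slow, far-away client delays everyone queued behind it."""
    t, done = 0.0, {}
    for name, mbits, rate_mbps in clients:
        t += mbits / rate_mbps        # airtime this client occupies alone
        done[name] = t                # seconds until this client finishes
    return done

def ofdma_completion_times(clients, share=None):
    """Idealized OFDMA: each client gets an equal slice of the subcarriers
    and transmits in parallel at a proportional fraction of its own rate."""
    share = share or 1 / len(clients)
    return {name: mbits / (rate_mbps * share) for name, mbits, rate_mbps in clients}

# Hypothetical numbers: 8 Mbit to deliver to each client; the far client
# has negotiated a very low data rate.
clients = [("far", 8, 10), ("near", 8, 400)]
print(sequential_completion_times(clients))  # near waits 0.82 s behind the slow client
print(ofdma_completion_times(clients))       # near finishes in 0.04 s, in parallel
```

The slow client itself finishes later under this naive equal split, which is why real schedulers allocate subcarriers adaptively rather than evenly.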

BSS Coloring to tame the interference

I already mentioned that the interference from other APs or wireless routers will have a major impact on the perceived range of your network.


And one of the reasons is co-channel interference, which occurs when multiple access points use the same channel and are therefore constrained to share it between them. As a consequence, you get a slower network, because if there are lots of connected clients, they’ll easily fill up the available airtime. BSS Coloring assigns a color code to each access point and the client devices associated with it.

This way, the signal broadcast is reduced from the client side so as not to interfere with other APs or client devices in proximity. Obviously, the power output is still high enough to ensure proper communication with the AP. And I know you haven’t seen this feature advertised much on the boxes of APs or routers, which is due to cost constraints. I have seen it on the EnGenius EWS850AP, a WiFi 6 outdoor access point suitable for some very specific applications, but not on many other WiFi 6 networking devices.

Besides cost, the reason why it’s not that common, especially on consumer WiFi 6 routers, is that it’s not yet that useful. I say that because unless all the clients in the area are equipped with WiFi 6 adapters, the WiFi 5 (and older) client devices will still broadcast their signal as far as they can, interfering with the other WiFi devices.

Do WiFi 6 routers actually have a better range?

In an ideal lab environment, most likely not since, as I said, the idea is to handle denser networks, not to push the WiFi range farther.

Asus RT-AC86U vs RT-AX86U.

But in real-life conditions, you should see a far better perceived range if the right conditions are met. And almost everything revolves around using WiFi 6 client devices that can actually take advantage of these features. It’s also wise to adjust the settings of your router or AP accordingly, since the default values are very rarely good. Ideally, your neighbors should do the same, since only then will you see a proper improvement in both range and network performance. Otherwise, there is barely any reason to upgrade from WiFi 5 equipment.

At the same time, it’s worth checking out WiFi 6E, which adds a new frequency band, 6GHz, that can increase the throughput in a spectacular manner since the radio is subjected to far less interference (the range doesn’t seem changed though). I have recently tested the EnGenius ECW336, which uses this new standard and yes, it’s a bit pricey, but Zyxel has released a new WiFi 6E AP that is a bit cheaper, and I will be testing it soon.

Source :
https://www.mbreviews.com/do-wifi-6-routers-have-better-range/

How many Watts does a PoE switch use – Are the newer network switches more power efficient?

OCTOBER 31, 2022 BY MARK B

In light of the current global price hikes for energy, you’re very much justified in worrying about how many Watts your PoE switch actually uses. And unless you have solar panels to enable your ‘lavish’ lifestyle, you’re going to have a bad time running too many networking devices at the same time, especially if they’re old and inefficient. But there’s the dilemma of features. For example, if we were to put two TVs side by side, an older one and a newer one, it would be obvious that the latter consumes less power.

EnGenius ECS2512FP Switch with lots of Ethernet cables.

But after adding all the new features and technologies, which do require more power to be drawn, plus the higher price tag, it becomes clear that it’s less of an investment than we initially thought. Still, the manufacturers are clearly pushing users towards PoE instead of the power adapter – the newer Ubiquiti access points only have a PoE Ethernet port.

And it makes sense considering that PoE devices are easier to install: no need to be close to a power source, no more used-up outlets, and the possibility of centralized control via a PoE switch. But for some people, all these advantages may fall short if the power consumption of such a setup exceeds an acceptable threshold. So, for those of you conflicted about whether you should give PoE Ethernet switches a try, let’s see how many Watts they actually consume.


Old vs new PoE switches – Does age matter?

The PoE standard started being implemented into network switches about two decades ago and it became a bit more common for SMBs about 10 years ago. The first PoE switch that I tested was from Open Mesh (the S8) and it supported the IEEE 802.3at/af.

Open Mesh S8 Ethernet Switch.

This meant that the power output per port was up to 30 Watts, so it can’t really be considered an old switch (unless you take into account that Open Mesh doesn’t exist anymore). But I wanted to mention this switch because, while the total power budget was 150 Watts, it did need to rely on a fan to keep the case cool. Very recently, I tested the EnGenius ECS2512FP, which offers almost double the PoE budget and 2.5GbE ports, yet relies on passive cooling.

So, even if it may not seem so at first, there have been significant advancements in power efficiency even in the last five years. Indeed, a very old Ethernet switch that supports only the PoE 802.3af standard (15.4W limit per port) most likely needed to be cooled by fans and was not really built with power efficiency in mind. And before an angry mob screams that EEE stands for Energy-Efficient Ethernet, so adhering to the 802.3af standard should already ensure that the switch doesn’t consume that much power – I had another standard in mind.

Multiple wireless access points.

It’s the Green Ethernet feature from the 802.3az standard that made the difference for network switches with lots of Ethernet ports. And this is an important technology because it makes sure that if a host has not been active for a long time, the port to which it is connected enters a sort of stand-by mode, where the power consumption is significantly reduced.

The port becomes active again once there is activity from the client side, and the switch does ping the device from time to time (meaning the power is not completely turned off). So, if your network switch is older, it may not have this technology, which means you may be losing a few dollars a month for this reason alone.
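To get a feel for what a few idle Watts actually cost over a year, here is a quick back-of-the-envelope sketch in Python (the $0.30/kWh rate is a hypothetical figure – adjust it to your market):

```python
# Rough yearly cost of a constant idle draw, assuming a hypothetical $0.30/kWh rate.
def yearly_cost_usd(watts: float, price_per_kwh: float = 0.30) -> float:
    kwh_per_year = watts * 24 * 365 / 1000  # Watts -> kWh over a full year
    return kwh_per_year * price_per_kwh

# e.g. 5 W that Green Ethernet could save on mostly-idle ports:
print(f"${yearly_cost_usd(5):.2f} per year")  # roughly $13
```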

How many Watts does a PoE switch use by itself?

It depends on the PoE switch that you’re using. A 48-port switch that has three fans running at full speed all the time is going to consume far more power than an 8-port unmanaged switch. You don’t have to take my word for it, let’s just check the numbers. I was lucky enough to still have the FS S3400-48T4SP around (it supports 802.3af/at and has a maximum PoE budget of 370W), so I connected it to a power source and checked how many Watts it eats up when no device is connected to any of the 48 PoE ports.

FS S3400-48T4SP – 1st: no devices connected. 2nd: TP-Link EAP660 HD connected. 3rd: Both the EAP660 HD and the EAP670 connected.

It was 24.5 Watts, which is surprisingly efficient considering the size of the switch and the four fans that run all the time. The manufacturer says that the maximum power consumption can be 400W, so the approx. 25W with no PoE devices attached falls within the advertised amount. Next, I checked the power consumption of the Zyxel XS1930-12HP.

This switch is very particular because it has eight 10Gbps Ethernet ports and supports the PoE++ standard (IEEE 802.3bt), which means that each port can offer up to 60W to a connected device. At the same time, the maximum PoE budget is 375 Watts and, with no device connected to any port, the Ethernet switch drew an average of 29 Watts (the switch does have two fans).

Zyxel XS1930-12HP – 1st: no devices connected. 2nd: TP-Link EAP660 HD connected. 3rd: Both the EAP660 HD and the EAP670 connected.

Yes, it’s more than the 48-port switch from FS, so it’s not always the case that more ports mean higher power consumption – obviously, more connected PoE devices will raise the overall draw.
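The per-port limits of the PoE standards mentioned so far can be summarized, and used to sanity-check a switch’s total budget, with a short Python sketch (the 375W example mirrors the Zyxel XS1930-12HP’s advertised budget):

```python
# Per-port PSE output limits for the common PoE standards (values from the IEEE specs).
POE_LIMITS_W = {
    "802.3af (PoE)":   15.4,
    "802.3at (PoE+)":  30.0,
    "802.3bt (PoE++)": 60.0,  # Type 3; Type 4 goes up to 90 W
}

def ports_within_budget(per_port_w: float, budget_w: float) -> int:
    """How many ports can draw full power before the total PoE budget is exceeded."""
    return int(budget_w // per_port_w)

# Example: a 375 W budget with full 60 W PoE++ loads on every port:
print(ports_within_budget(POE_LIMITS_W["802.3bt (PoE++)"], 375.0))  # 6 ports
```

In other words, the advertised budget rarely covers every port at maximum draw at once, which is normal, since few real deployments load all ports fully.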

Unmanaged vs Managed switches

Lastly, I checked out the power consumption of an unmanaged switch, the TRENDnet TPE-LG80, which has eight PoE ports with a maximum budget of 65W. The supported PoE standards are IEEE 802.3af and IEEE 802.3at, so it can go up to 30W per port. That being said, the actual power consumption with no device connected was 3 Watts.

TRENDnet TPE-LG80 – 1st: no devices connected. 2nd: TP-Link EAP660 HD connected. 3rd: Both the EAP660 HD and the EAP670 connected.

Quite the difference when compared to the other two switches, but it was to be expected for a small unmanaged Gigabit PoE switch.

Access Points: PoE vs Power adapter

I am not going to bore you with details. You know what an access point is, and you also know that some have a power adapter, while others don’t. So I took the TP-Link EAP660 HD and the EAP670 (because I had them left on the desk after testing them) and checked whether the power consumption differs between PoE and the provided adapter. I also connected the APs to the three switches mentioned above to see if there’s a difference in PoE draw between brands, and between managed and unmanaged switches.

The TP-Link EAP660 HD draws an average of 6.9 Watts when connected to the socket via the power adapter. The EAP670 needs a bit less, with an average of 6.4 Watts. When connected to the 48-port FS S3400-48T4SP, the EAP660 HD needed 7.7W from the PoE budget, while the EAP670 added 7.6W, so, overall, the power consumption is higher over PoE. Moving on to the PoE++ Zyxel XS1930-12HP switch, adding the TP-Link EAP660 HD raised the draw by 10.5W, while connecting the EAP670 added another 6.8W, which is quite the difference.

Comparison Access Points: PoE vs Power adapter.

Obviously, neither access point was connected to any client device, so there should be no extra overhead. In any case, we see that the PoE consumption is once again slightly higher than with the power adapters. Lastly, after connecting the EAP660 HD to the unmanaged TRENDnet TPE-LG80, the power consumption rose by 10 Watts, which is in line with the previous network switch. Adding the EAP670 drew an extra 6.8W, again the same value as on the previous switch.

In conclusion, we can see objectively that using the power adapter means less power consumption, and that’s without taking into account the power needed to keep the switch itself alive.
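Tallying the measured figures from above makes the PoE overhead explicit; a minimal Python sketch using the numbers from the FS switch test:

```python
# Measured draws from the tests above (Watts).
adapter = {"EAP660 HD": 6.9, "EAP670": 6.4}   # wall adapter
poe_fs  = {"EAP660 HD": 7.7, "EAP670": 7.6}   # PoE via the FS S3400-48T4SP

adapter_total = sum(adapter.values())
poe_total = sum(poe_fs.values())
print(f"adapter: {adapter_total:.1f} W, PoE: {poe_total:.1f} W, "
      f"overhead: {poe_total - adapter_total:.1f} W")  # about 2 W extra over PoE
```

And that 2W gap is per pair of APs, before adding the switch’s own baseline draw of 24.5W.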

Does the standard matter?

I won’t extrapolate to all the available PoE switches on the market, but in my experience, it does seem that the PoE++ switches (those that support the 802.3bt standard) consume more power than the 802.3af/at switches, so yes, the standard does matter. Is it a significant difference?

The switches and the access points that I just tested.

Well, it can add up if you have lots of switches for lots of access points, but bear in mind that most APs will work just fine with the 30W limitation in place. So, unless you need something very particular, I’m not sure that PoE++ is mandatory for now, since it’s going to become more widespread and efficient in time.

Passive cooled PoE switches vs Fans

This one is pretty obvious. Yes, fans do need more power than a passive cooling system, so, at least in the first minutes or hours, the advantage goes to passive cooling. But things change when the power supply and the components start to build up heat, which can make the entire system less efficient than a fan-cooled one.

Source :
https://www.mbreviews.com/how-many-watts-does-a-poe-switch-use/

What are Spatial Streams? And does the number of spatial streams actually matter?

AUGUST 6, 2022 BY MARK B

Spatial streams are the connections between the router and the client device over which data is sent. To get a better grasp of what I am talking about, we need to go way back to WiFi 3 (the IEEE 802.11g standard) and lower, which used so-called SISO systems (Single Input Single Output). The idea was to use a single transmitter antenna, and the signal would be received by the access point on a single antenna.

And it’s true that the early days of WiFi routers were promising, but also quite rough, because without clear line of sight, the AP could experience reflections of the signal in the room (multi-path fading), the risk of the cliff effect if there is too much interference, and more. These problems were mostly fixed with the emergence of MIMO, which uses multiple transmission antennas to send the signal towards multiple reception antennas.

SISO (Single Input Single Output)

In other words, the slightly more modern approach is to use multiple spatial streams to send and receive the data. Then there’s MU-MIMO, which takes things to another level. And I know you came here to understand what the numbers on the router box actually mean, whether MU-MIMO actually matters, and whether support for 4×4, 8×8 or 16×16 (and more) is something your wireless router (or separate access point) needs to have. You will see that most of it is just over-the-top advertising with little to no real-life improvement to the WiFi performance, so let’s see why that is. But first, let’s get a better understanding of spatial streams and MIMO.


Spatial Stream and MIMO

We already established what SISO is, but there are some other configurations that manufacturers explored before the MIMO approach. For example, SIMO (Single Input Multiple Output) uses more than one receiver antenna on the same radio to capture the signal, so it has more than one chance to be properly processed. And there’s also the MISO approach, where the signal is broadcast across more than one stream, with a single antenna receiving it.

MIMO is the better form, where the same signal is transmitted across multiple streams and also received by multiple antennas. It’s not that the receiver chooses which copy is the better one – all get processed, and the end result is what the receiver interprets to be the original signal, based on what it received at different intervals, with various amounts of data loss, and so on. What we just described is called spatial diversity, where the same signal gets transmitted across multiple spatial streams towards multiple antennas, keeping the risk of degradation to a minimum, but there are other approaches as well.

MIMO – Spatial Diversity and Spatial Multiplexing.

One of them is called spatial multiplexing, where the idea is to increase the data transfer rate, since more than one independent stream of data is transmitted via multiple streams. The risk comes from interference, which is why the data streams aren’t transmitted at the same time, but are phased out at different points in time. Another method that helps move data without risking collision or interference is dividing the bandwidth into multiple frequency bands, each used to stream an independent and separate signal.

This is also known as FDM, but I am sure you may have also heard about OFDM, which moves data a bit differently. To make the bandwidth use even more efficient, the carriers are orthogonal. This means that instead of being far apart, as they were with FDM, with OFDM they are much more densely packed, and the distance between carriers is minimal, since orthogonality causes little adjacent channel interference.
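The denser packing is easy to quantify: 802.11ax narrowed the OFDM subcarrier spacing from 312.5 kHz (used by 802.11a/n/ac) to 78.125 kHz, quadrupling the subcarrier count in the same 20 MHz channel. A quick Python check:

```python
def subcarriers(channel_mhz: float, spacing_khz: float) -> int:
    """Total subcarrier slots that fit into a channel at a given spacing."""
    return int(channel_mhz * 1000 / spacing_khz)

print(subcarriers(20, 312.5))   # 64  (802.11a/n/ac numerology)
print(subcarriers(20, 78.125))  # 256 (802.11ax OFDMA numerology)
```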

Spatial Streams and MU-MIMO

MU-MIMO (Multi-User Multiple Input Multiple Output) is supposed to be some sort of holy grail for handling multiple demanding client devices. That’s because, while SU-MIMO (or plain MIMO) can handle one client device at a time, MU-MIMO should serve more than one device at the same time.

MU-MIMO – Linksys EA8500.

If you don’t know yet, the way client devices are handled ‘in the traditional sense’ (SU-MIMO) is first come, first served. So, if a device is connected at a high data transfer rate, it receives or sends its data quickly and lets another device be served. With modern hardware, you won’t even notice that your WiFi devices actually take turns. That is, unless you start streaming large amounts of data at the same time on multiple devices, which is where you’re going to start seeing the buffering icon.

Furthermore, be aware that devices that are far away and connected at a lower data transfer rate are going to slow down the network, because it takes them longer to finish a task (which is why it’s better to avoid legacy devices and not to increase the transmit power on your access point).
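A quick airtime sketch in Python shows why a slow client drags the whole take-turns network down (the link rates below are hypothetical examples):

```python
# Airtime needed to move the same payload at different link rates,
# when clients are served one at a time (SU-MIMO style).
def airtime_s(payload_mb: float, link_mbps: float) -> float:
    return payload_mb * 8 / link_mbps  # MB -> Mb, divided by link rate

fast = airtime_s(100, 600)  # a nearby, modern client
slow = airtime_s(100, 24)   # a distant or legacy client
print(f"fast client: {fast:.2f} s, slow client: {slow:.2f} s")
```

The slow client occupies the air roughly 25 times longer for the same 100 MB, time during which nobody else gets served.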

MU-MIMO doesn’t really change the way a single client is handled, but it can do the same for more than one device at the same time. Imagine that your router starts behaving as if it were two, four or more routers at once. This way, the client devices don’t have to wait one after the other. The problem is that MU-MIMO doesn’t seem to live up to expectations. Yet.

Is MU-MIMO underperforming?

On paper, it shouldn’t be. And the router boxes do have the theoretical maximum data transfer rates printed in bold letters and numbers. So, the first culprit is the advertising. You know that Asus, TP-Link, Linksys or Netgear router that should seemingly reach 6,000Mbps (AX6000) or more, since we also have AX11000 routers now? Well, you’re not going to see those numbers in real life.

Netgear RAX43.

Actually, if you check the single-stream performance, it most likely won’t even get close to 1Gbps. So, what’s the deal? Well, the manufacturers add up the maximum possible rate for each radio, which, in turn, is based on the maximum number of data streams that can be handled at the same time. So does this mean that, using MU-MIMO, you’re actually going to see better performance? Well, not as much as you’d have hoped, and in some cases you may actually see worse performance.
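As a concrete illustration of how such a class number is assembled, here is a Python sketch summing the per-radio theoretical maxima that typically make up an “AX6000” label (1148 + 4804 Mbps is the common combination, though the exact figures vary per model):

```python
# Typical per-radio theoretical PHY maxima behind an "AX6000" class label.
radios_mbps = {
    "2.4 GHz, 4x4, 40 MHz": 1148,
    "5 GHz, 4x4, 160 MHz":  4804,
}
total = sum(radios_mbps.values())
print(total)  # 5952 Mbps, rounded up and marketed as AX6000
```

No single client can ever use both radios at once, let alone hit either radio’s maximum, which is why the headline number is unreachable in practice.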

At least two sources (1)(2) have confirmed that not only did they not see better performance when using MU-MIMO devices, but in some cases it was actually a bit worse. That’s not because the technology is bad; it’s because the WiFi adapters just aren’t that great. Most PC adapters, laptops and smartphones are still stuck with a 2×2 MU-MIMO WiFi adapter. And both Qualcomm and Broadcom chipsets seemed to drop to 1×1 even if the client devices were 2×2 and the router was 4×4. These tests were done with WiFi 5 hardware, where MU-MIMO was limited to downstream only. So has anything changed with WiFi 6?

Besides adding support for MU-MIMO upstream as well, it does seem that MU-MIMO offers some improvements with WiFi 6 client devices and access points, but only marginal ones. So, it seems that MU-MIMO can be useful only in very specific scenarios: a very crowded network where the client devices don’t move around.

WiFi 6 adapter on a Desktop PC.

But, in most cases, it’s still a borderline gimmick that manufacturers like to put on the box to sell the router. That’s because the client devices are still way behind the WiFi technological advances, and consumer routers are underpowered. Still, if you have multiple 4×4 MU-MIMO PCs and a powerful WiFi 6 access point, you may see a benefit if your network is pushed to the limit.

Beamforming

You may have seen the term Beamforming advertised alongside MU-MIMO on wireless router / AP boxes. It refers to a very interesting technique where the signal is transmitted towards the connected clients instead of being broadcast everywhere. The way wireless routers (or access points) do this is by identifying a compatible receiver and then increasing the power output (and thereby the data transfer rates) only towards that client device. The particularity of Beamforming is that it’s effective only for medium-range transmissions.

If the client device is close enough to the router, it’s already at a high transfer rate and doesn’t need Beamforming. The same is true if the client device is too far away, because the gain from Beamforming will not be enough to increase the data transfer rate. But what’s even more interesting is that, despite being advertised as a technology that will change the way your devices connect to the network, it’s actually very rarely used with commercial devices. That’s because of antenna gain limits, as we’ll see next.

Source: TP-Link official website.

Beamforming works best with point-to-point access points, because the idea is to focus the signal over very large distances with clear line of sight, without worrying about going above some set limit. Indoors, there is a limit set by EIRP regulations, and your access point or wireless router will make sure it doesn’t go above it. So, even if Beamforming is able to push way past that limit (for example, three or four beamforming antennas can easily exceed the 6dBi maximum gain), the transmit power will be severely cut.
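The EIRP arithmetic is simple: EIRP (dBm) = transmit power (dBm) + antenna gain (dBi), so once the cap is reached, every extra dBi of beamforming gain forces an equal cut in transmit power. A small Python sketch (the 36 dBm cap is an illustrative regulatory value, not a universal limit):

```python
# EIRP (dBm) = transmit power (dBm) + antenna gain (dBi).
def max_tx_power_dbm(eirp_limit_dbm: float, antenna_gain_dbi: float) -> float:
    """Highest transmit power allowed under a given EIRP cap."""
    return eirp_limit_dbm - antenna_gain_dbi

# With a hypothetical 36 dBm EIRP cap, higher-gain arrays force a power cut:
for gain in (3, 6, 9):
    print(f"{gain} dBi antenna -> tx power capped at {max_tx_power_dbm(36, gain)} dBm")
```

This is why indoor beamforming gains are largely neutralized: the regulator caps the combined figure, not the antenna alone.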

But there is more, because it seems that WiFi 5 and WiFi 6 routers (and access points) prioritize spatial multiplexing over beamforming, especially on 4×4 and lower devices. Obviously, the one-at-a-time approach still applies here as well, and the AP will switch dynamically between the supported modes when handling a client device. Even so, the more spatial streams available, the better for the signal, right? Yes, more spatial streams mean more ways to properly transmit the data, ensuring that it arrives at the destination quickly and as intact as possible.

Bibliography:
(1) ScienceDirect.com
(2) SmallNetBuilder.com

Source :
https://www.mbreviews.com/what-are-spatial-streams/

How to Log in SSH of Yeastar S-Series VoIP PBX

Yeastar Support Team
January 15, 2024 21:11

This article introduces how to log in to SSH on the Yeastar S-Series VoIP PBX, K2, TAv3 and TGv3 Gateway.

This article does not apply to other Yeastar products. Please refer to How to log in SSH of Yeastar MyPBX, N-Series Analog PBX, and VoIP Gateway.

How to Login via SSH

  1. Download the popular SSH tool PuTTY from the Internet.
  2. Log in to the Yeastar S-Series IPPBX web interface, navigate to Settings > System > Security > Service, and enable SSH. Note that since version 30.7.0.27, the SSH password has been changed to a random password, which is shown when you enable the SSH service.
  3. Open PuTTY, and enter the login IP, Port and Connection type (SSH).
    • Host Name: IP address of the Yeastar S-Series IPPBX
    • Port: the default SSH port is 8022
  4. Set the scrollback line number so that you can get sufficient lines of log for debug analysis.
  5. Click Apply to enter the SSH interface. Log in to SSH with the following credentials:
    • Username: support
    • Password: iyeastar (or the random password)

Note: when you enter the password, it is not displayed, so you can’t see what you are typing.

Command Mode and Asterisk Mode

Next, we introduce the two important modes in the SSH interface: Command mode and Asterisk mode.
After you enter the SSH interface, you are in Command mode. In this mode, you can execute Linux-based commands, like ls, cd, route and so on.

To enter Asterisk mode, input asterisk -vvvvvvvvvr in Command mode. In this mode, you are able to execute Asterisk-based commands, like pjsip show endpoints, pjsip set logger on and so on.

Source :
https://support.yeastar.com/hc/en-us/articles/115004259608-How-to-Log-in-SSH-of-Yeastar-S-Series-VoIP-PBX