Manual review request

The latest version of Visual LVM Remote has been submitted.
We have tried our best to remove interfaces as instructed.
The current version uses the fewest interfaces possible, and we hope it can pass the review.
Thank you.

I moved your post to the store-requests category so the reviewers see it.

Hey @weLees,

Please remember there is a process to request auto-connections. When creating a post, you should explain which interfaces you are requesting auto-connection for, and the technical reason behind each request.

That said, I found a related post where you already provided an explanation, but the latest snap declaration does not match the original request.

Also, looking at the latest version, I see you are still using system-files for accesses where we already mentioned other interfaces should be used instead. The system-files interface is typically used to provide read-only access to system configuration directories created by a non-snap version of an application now running from an equivalent snap, which is not the case here.

Did you try plugging system-observe and hardware-observe (or others) as suggested?

I recommend running snappy-debug while troubleshooting; it will recommend interfaces based on the behavior it observes in your snap.

@weLees ping, can you please provide the requested information?

Hi emitorino, sorry for the late reply. We are currently solving an urgent issue in Visual LVM; we will try after it is complete.


Hi, the new version (4.4) has been submitted; we removed the system-files interface. Please review, thanks.

I notice your snap is still using system-files:

    interface: system-files
    - /run/lvm
    - /sys/firmware/dmi/tables
    - /sys/devices
    - /proc
    - /etc/visual_lvm
    - /dev/mapper
    - /run/lvm
    - /run
    - /dev
    - /sys/devices/virtual/bdi
    interface: system-files
    - /dev/null

So this has not been removed. Can you please clarify?

In the latest version, 4.4.779, we have tried our best to remove system-files references; only 3 are left:

interface: system-files
  - /dev            # for fdisk to enumerate disks
  - /sys/devices    # to get disk information (vendor/type)
  - /proc           # to enumerate md (RAID) devices

Without them, we can't show the user the storage information.
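As an illustration of the /proc access being discussed, here is a minimal sketch of enumerating md (RAID) devices by parsing /proc/mdstat. The sample content below is hypothetical; on a real system you would read the file directly, and the standard layout lists active arrays as `mdN : active ...` lines.

```python
import re

def parse_mdstat(text: str) -> list[str]:
    """Return the md device names listed in /proc/mdstat content.

    Active arrays appear as lines like:
      md0 : active raid1 sda1[0] sdb1[1]
    """
    devices = []
    for line in text.splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            devices.append(m.group(1))
    return devices

if __name__ == "__main__":
    # Hypothetical sample content; on a real system, read /proc/mdstat itself.
    sample = (
        "Personalities : [raid1]\n"
        "md0 : active raid1 sda1[0] sdb1[1]\n"
        "      1046528 blocks super 1.2 [2/2] [UU]\n"
        "unused devices: <none>\n"
    )
    print(parse_mdstat(sample))
```

Note that reading /proc/mdstat in a strictly confined snap still requires an appropriate interface; the sketch only shows why the file is useful.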

Hello @weLees, did you try to use the system-observe and hardware-observe as mentioned in the previous comments?

Yes, we have tried but failed…

@weLees, apologies for this long discussion, but we are trying to help you follow best practices for snapping your application.

It is not correct to grant wide access to /proc, /dev, etc. without really understanding why you are not able to make your snap work using the suggested interfaces. Can you please try to use them and share here the AppArmor denials you are seeing? This will be very helpful for us to get a better understanding and hopefully help you make your snap work the correct way.
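For reference, AppArmor denial lines in the kernel log use a key=value format, so the fields reviewers need (profile, path, denied mask) are easy to pull out. The sample line below is hypothetical, not taken from this snap:

```python
import re

def parse_apparmor_denial(line: str) -> dict:
    """Extract key="value" and key=value pairs from an AppArmor audit line."""
    fields = {}
    for key, quoted, bare in re.findall(r'(\w+)=(?:"([^"]*)"|(\S+))', line):
        fields[key] = quoted if quoted else bare
    return fields

if __name__ == "__main__":
    # Hypothetical denial line; real ones appear in `dmesg` or the journal.
    sample = ('audit: type=1400 apparmor="DENIED" operation="open" '
              'profile="snap.visual-lvm-remote.service" name="/dev/sda" '
              'pid=1234 comm="vlvmservice" requested_mask="r" denied_mask="r"')
    fields = parse_apparmor_denial(sample)
    print(fields["profile"], fields["name"], fields["denied_mask"])
```

Sharing the raw lines is enough for review purposes; the parser just shows which fields matter.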

If this helps, this is a recent application we have granted declarations Approval request for list-filesystem - lfs and is able to list drives utilization by plugging some of the interfaces we have suggested.


@weLees - hello, can you please provide the requested information? What denial messages are you seeing?

Hi, rev480 has been submitted; it uses the following plugs:

    command: bin/vlvmservice start $SNAP_DATA
    command: bin/vlvmservice stop $SNAP_DATA
    command: bin/vlvmservice restart $SNAP_DATA

Thanks for the update. The only interface in this list that is quite privileged is block-devices, which essentially allows a snap to control the entire device. Can you please explain why this is required for Visual LVM Remote? Also note that to be granted access to this interface, publisher vetting would be required as well.

Visual LVM needs to read LVM metadata from PVs (disks/partitions). So we have to access block devices such as /dev/sdX and /dev/hdX.

Can you please try and be more specific on what exactly is required here? Are there particular files that are being accessed and if so can you please detail these?

I wonder if mount-observe might be sufficient for this purpose? Or perhaps you can use udisks2 to query this information via DBus? The more information you can provide on what exactly Visual LVM Remote requires then the more help we can give to come to a solution. Thanks.

Hi, thanks for your reply. To detect LVM components, we need to read the head (sectors 1-4) of the PV to get PV information, and read the head and/or tail of the device to get VG information (it depends on the settings). So we have to read the head/middle/tail of any disk or partition that can be a PV.

Hello! So has Visual LVM been refused?

Hey @weLees, apologies for the delay. Visual LVM has not been refused!

Thanks a lot for updating visual-lvm-remote and applying all the suggestions provided. I am then +1 for auto-connect of hardware-observe and system-observe to visual-lvm-remote. I am +1 for use, but not auto-connect, of block-devices due to the sensitivity of the accesses when granted.

Can other @reviewers please vote?

+1 from me too for use-of and auto-connect of hardware-observe and system-observe to visual-lvm-remote. Similarly, +1 only for use-of but not auto-connect of block-devices as this grants device ownership to the snap. However, regardless, we still require publisher vetting in this case as well. @advocacy can you please perform publisher vetting?