There are a lot of ways to render images. Whenever the images need to be scaled, the result won't be as good as when the images are at 100% zoom because filtering has to be applied to map image pixels to screen pixels. The bigger the difference between the size of your images and the size they are shown on screen, the more scaling is needed and the more apparent the filtering artifacts will be.
What kind of filtering is used on your images? Linear or bicubic filtering will produce decent results. When downscaling, quality can be improved by using mipmaps and anisotropic filtering.
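For example, with libGDX you can enable mipmaps and trilinear filtering when loading a texture yourself. This is just a sketch: the file name is a placeholder and setAnisotropicFilter is only available in newer libGDX versions.

```java
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.Texture.TextureFilter;

// Load the atlas page with mipmap generation enabled (second parameter is useMipMaps),
// then use trilinear filtering for minification and linear for magnification.
Texture texture = new Texture(Gdx.files.internal("skeleton.png"), true);
texture.setFilter(TextureFilter.MipMapLinearLinear, TextureFilter.Linear);

// Optional: anisotropic filtering sharpens downscaling further. The driver
// clamps the level to what the GPU supports.
texture.setAnisotropicFilter(4f);
```

If you load through a TextureAtlas instead, set the min/mag filters when packing, since the packer writes them into the .atlas file's filter line.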
However, it may be easier to use images that are close to the size at which they will be displayed on screen. Some apps are distributed with multiple texture atlases at different scales, then use the one that best matches the user's screen size, as in the sketch below. This is easy and produces pretty good results. There will almost always be some mismatch between images and screen resolution unless rendering vector graphics (which is difficult, uses more resources, and Spine doesn't support). The difference between multiple atlases and vector is typically pretty small.
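Here's a rough spine-libgdx sketch of choosing between atlases exported at 1, 0.5, and 0.25 scale. The folder names, file names, and the 1080 reference height are just assumptions for this example:

```java
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.g2d.TextureAtlas;
import com.esotericsoftware.spine.SkeletonData;
import com.esotericsoftware.spine.SkeletonJson;

// Pick the atlas whose scale is closest to the on-screen size,
// assuming the art was authored for a 1080p screen.
float screenScale = Gdx.graphics.getHeight() / 1080f;
String folder;
if (screenScale > 0.75f)
	folder = "atlas-1.0/";
else if (screenScale > 0.375f)
	folder = "atlas-0.5/";
else
	folder = "atlas-0.25/";

TextureAtlas atlas = new TextureAtlas(Gdx.files.internal(folder + "skeleton.atlas"));
SkeletonJson json = new SkeletonJson(atlas);
SkeletonData skeletonData = json.readSkeletonData(Gdx.files.internal("skeleton.json"));
```

Note that attachment sizes come from the skeleton data, not the atlas, so a lower resolution atlas still renders at the same size on screen, just with less texture detail.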