iOS Development

Why does your SwiftUI app feel slow even though Instruments says it’s fine?

19 min read · Rafał Dubiel
#SwiftUI #Performance #UX #iOS

Imagine this situation: QA reports that the settings screen “stutters.” You fire up Instruments, profile it - CPU at 12%, zero dropped frames, Time Profiler shows clean stacks. You close the ticket with the comment “cannot reproduce.” Two weeks later, the same feedback shows up in App Store reviews.

The problem is not in what you measure. The problem is in what you don’t measure.

Two worlds of performance

In iOS engineering, there are two separate performance metrics that rarely meet on the same dashboard.

Measured performance - performance you can measure: FPS, frame render time, CPU time, memory allocations. This is the domain of Instruments. Hard, objective, repeatable data.

Perceived performance - performance as experienced by the user: “does this app feel fast?” And this is where the problem starts, because perceived performance does not correlate linearly with measured performance. An app can hold a steady 60 FPS and still feel slow.

UX research confirms this effect - screens with skeleton loading make users perceive loading as 20–30% faster than when using spinners, even when the real loading time is identical. That’s not magic - that’s the psychology of time perception.

This article is not about optimizing a view’s body or debugging it with _printChanges(). It’s about what makes users feel that an app is slow and how to fix it at the architectural level.

Problem #1: silence after a tap

The most common perceived-performance sin in SwiftUI apps is the lack of immediate feedback after user interaction.

The user taps a button. For 300 ms, nothing happens. Then suddenly a new screen appears. From Instruments’ perspective - everything is fine, the push took 280 ms. From the user’s perspective - “the app froze.”

Why does this happen? Because in a typical MVVM architecture with async/await, the whole chain looks like this:

// Typical pattern - silence after a tap
func didTapOrder() {
    Task {
        let result = await orderService.placeOrder(cart)
        switch result {
        case .success(let confirmation):
            navigateToConfirmation(confirmation)
        case .failure(let error):
            showError(error)
        }
    }
}

The user taps, nothing changes on screen, and then there’s a sudden jump to a new state. This is classic passive waiting - the user gets no information that the system even acknowledged their intent.

The solution is an immediate UI state change at the moment of interaction, before we even know the result of the operation:

// Immediate feedback + optimistic state
func didTapOrder() {
    // 1. Immediately: change UI state
    orderState = .processing

    // 2. Haptic feedback - physical confirmation
    // Note: if your view already has a trigger value for .sensoryFeedback(),
    // prefer that modifier over UIImpactFeedbackGenerator - it is declarative,
    // does not require manual instance management, and automatically handles
    // Taptic Engine preparation. More on this in Problem #4.
    UIImpactFeedbackGenerator(style: .medium).impactOccurred()

    Task {
        let result = await orderService.placeOrder(cart)
        switch result {
        case .success(let confirmation):
            orderState = .completed(confirmation)
        case .failure(let error):
            // 3. Roll back to the previous state
            orderState = .idle
            showError(error)
        }
    }
}

The key difference: the user immediately sees a change and feels a vibration. Even if the request takes 2 seconds, they do not feel ignored.

Problem #2: the dead screen during navigation

SwiftUI’s NavigationStack has one trait that kills perceived performance: onAppear and .task fire in parallel with the transition animation, but the destination view renders with empty data.

In practice, it looks like this: the user taps a list item, the push animation starts, and the destination screen appears as an empty ProgressView - only to fill with data 500 ms to 2 seconds later.

The problem gets worse when the destination view fetches data in .task:

// The screen appears empty
struct ProductDetailView: View {
    let productId: String
    @State private var product: Product?

    var body: some View {
        Group {
            if let product {
                ProductContent(product: product)
            } else {
                ProgressView() // This is all the user sees for ~1s
            }
        }
        .task {
            product = await productService.fetch(productId)
        }
    }
}

For a moment after the push animation, the user is staring at a spinner - those are the milliseconds in which they lose trust in the app’s responsiveness.

Strategy: preloading + skeleton

The solution has two parts:

First - pass as much data as you already have into the destination view at the moment of navigation:

// Pass data you already have
NavigationLink(value: Route.productDetail(product.summary)) {
    ProductRow(product: product.summary)
}

// Destination view immediately displays the summary
struct ProductDetailView: View {
    let summary: ProductSummary // We have this right away
    @State private var fullProduct: Product?

    var body: some View {
        ScrollView {
            // Header visible immediately - from data we already have
            ProductHeader(
                name: summary.name,
                price: summary.price,
                imageURL: summary.thumbnailURL
            )

            // Details: skeleton → content
            ProductDetails(product: fullProduct)
                .redacted(reason: fullProduct == nil ? .placeholder : [])
        }
        .task {
            fullProduct = await productService.fetchFull(summary.id)
        }
    }
}

Second - use .redacted(reason: .placeholder) instead of ProgressView:

// Skeleton with shimmer - the user sees structure immediately
struct ProductDetails: View {
    let product: Product?

    var body: some View {
        VStack(alignment: .leading, spacing: 12) {
            // Placeholder glyphs, not spaces - .redacted() masks glyph
            // rectangles, so whitespace would render as an empty gap
            Text(product?.description ?? String(repeating: "X", count: 120))
                .font(.body)

            HStack {
                Label(product?.category ?? "Placeholder", systemImage: "tag")
                Spacer()
                Label(product?.rating ?? "0.0", systemImage: "star.fill")
            }
            .font(.subheadline)
        }
        .redacted(reason: product == nil ? .placeholder : [])
        .modifier(ShimmerModifier(isActive: product == nil))
    }
}

The difference in perception is dramatic: instead of emptiness → spinner → content, the user sees header → skeleton with shimmer → full content. Each stage is visually consistent with the next, which reduces Cumulative Layout Shift (yes, that’s a term from the web’s Core Web Vitals, but it describes the exact same problem on mobile).

A note on shimmer: you do not need to write your own ShimmerModifier from scratch. The open-source package SwiftUI-Shimmer gives you .shimmering(), which composes with .redacted(reason: .placeholder) in a single line. If you prefer to avoid dependencies, a lightweight gradient animation based on TimelineView + Date achieves the same effect in under 20 lines. The barrier to entry here is lower than most teams think.
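To make the “under 20 lines” claim concrete, here is a sketch of what such a lightweight shimmer could look like - a moving gradient overlay driven by TimelineView. The name ShimmerModifier matches the earlier example; the 1.5 s loop duration and 0.4 opacity are arbitrary choices, not requirements:

```swift
import SwiftUI

// Minimal shimmer sketch: a gradient highlight that slides across the
// content while isActive is true. TimelineView(.animation) redraws every
// frame; we derive a looping 0...1 phase from the current date.
struct ShimmerModifier: ViewModifier {
    let isActive: Bool

    func body(content: Content) -> some View {
        content.overlay {
            if isActive {
                TimelineView(.animation) { context in
                    // Phase loops every 1.5 seconds
                    let phase = context.date.timeIntervalSinceReferenceDate
                        .truncatingRemainder(dividingBy: 1.5) / 1.5
                    GeometryReader { geo in
                        LinearGradient(
                            colors: [.clear, .white.opacity(0.4), .clear],
                            startPoint: .leading,
                            endPoint: .trailing
                        )
                        .frame(width: geo.size.width / 3)
                        // Slide the highlight from off-screen left to off-screen right
                        .offset(x: geo.size.width * 1.5 * phase - geo.size.width / 2)
                    }
                }
                .allowsHitTesting(false)
            }
        }
    }
}
```

Applied via the .modifier(ShimmerModifier(isActive: product == nil)) call from the earlier snippet, this composes cleanly with .redacted(reason: .placeholder) because the overlay sits on top of the redacted shapes.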

Problem #3: a Task inside .task blocking MainActor

This is a trap that almost every team migrating from GCD to Swift Concurrency falls into. And it is especially insidious, because Instruments will show you a hang or a hitch - but it will not tell you why your async function is blocking the main thread.

The mechanism is simple and well documented, but it still surprises people: .task in SwiftUI inherits actor isolation from the view - and SwiftUI views are, starting with Xcode 26, automatically @MainActor-isolated. That means a Task created inside .task often runs on MainActor unless you explicitly move the work out of the UI context.

// Looks async, but blocks MainActor
struct SearchView: View {
    @State private var results: [Item] = []

    var body: some View {
        List(results) { item in
            ItemRow(item: item)
        }
        .task {
            // This entire function runs on MainActor!
            results = await viewModel.performSearch(query)
        }
    }
}

@MainActor
@Observable
final class SearchViewModel {
    func performSearch(_ query: String) async -> [Item] {
        // Looks async, but synchronous work here
        // blocks MainActor
        let parsed = heavyParsingOperation(query) // ← blocks UI
        let results = await api.search(parsed)
        return results.sorted(by: complexSortPredicate) // ← blocks UI again
    }
}

The user sees a frozen UI during heavyParsingOperation and complexSortPredicate, even though the whole function is async. Instruments will show a hitch, but it is easy to miss amid the noise.

Solution: explicitly move CPU-heavy work off MainActor:

// Heavy work explicitly off MainActor
@MainActor
@Observable
final class SearchViewModel {
    func performSearch(_ query: String) async -> [Item] {
        // Move heavy work into a nonisolated context
        let results = await Self.processSearch(query, api: api)
        return results
    }

    // nonisolated = does not inherit @MainActor
    private nonisolated static func processSearch(
        _ query: String,
        api: SearchAPI
    ) async -> [Item] {
        let parsed = heavyParsingOperation(query) // Does not block UI
        let results = await api.search(parsed)
        return results.sorted(by: complexSortPredicate)  // Does not block UI
    }
}

The key is nonisolated - without it, a function inside an @MainActor class will still block the main thread, even if it is async.

Problem #4: animations that lie

SwiftUI gives you beautiful default animations, but default does not mean optimal from a perceived-performance perspective. Two aspects matter in particular:

No spring animation in user interactions

.easeInOut is appropriate for automatic transitions (loading, fade-in), but for direct user interactions - taps, drags, toggles - it lacks physicality. Users subconsciously expect the interface to behave like a physical object.

// Mechanical, “artificial” animation
withAnimation(.easeInOut(duration: 0.3)) {
    isExpanded.toggle()
}

// Spring-based - feels responsive and natural
withAnimation(.spring(response: 0.35, dampingFraction: 0.7)) {
    isExpanded.toggle()
}

Apple’s Human Interface Guidelines recommend transition durations of 300–500 ms with deceleration curves for end states and spring physics for attraction effects. .spring(response: 0.35, dampingFraction: 0.7) is a safe starting point - fast, with a subtle overshoot.

No haptic feedback on state changes

Haptic feedback is what separates a merely “usable” app from a pleasant one. Since iOS 17, the preferred approach is SwiftUI’s .sensoryFeedback() modifier - it is declarative, does not require managing UIImpactFeedbackGenerator instances, and automatically handles Taptic Engine warm-up. Use it instead of imperative UIImpactFeedbackGenerator whenever you have a trigger value you can bind to:

Button("Add to cart") {
    cartViewModel.add(product)
}
.sensoryFeedback(.impact(weight: .medium), trigger: cartViewModel.itemCount)

In cases where you need haptics imperatively (inside Task, in response to an async result), UIImpactFeedbackGenerator is still fine - just remember to call prepare() in advance if latency matters.
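A sketch of that imperative pattern, assuming a hypothetical wrapper type (OrderHaptics is an illustrative name, not an API): keep a single generator alive and call prepare() just before the event you expect to confirm.

```swift
import UIKit

// Imperative haptics with latency control: prepare() spins up the
// Taptic Engine so impactOccurred() fires with minimal delay.
final class OrderHaptics {
    private let generator = UIImpactFeedbackGenerator(style: .medium)

    // Call when the user is about to commit (e.g. button highlight).
    // The engine stays prepared for a few seconds.
    func willPlaceOrder() {
        generator.prepare()
    }

    // Call from the async completion path.
    func orderPlaced() {
        generator.impactOccurred()
    }
}
```

The design point: a generator created inline at the moment of the event (as in the earlier didTapOrder example) works, but pays the engine warm-up cost; a retained, prepared generator does not.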

Rule of thumb: key actions that change visible state on screen should have accompanying haptic feedback. Not every action, obviously, but adding/removing items, switching modes, confirming actions - yes.

Problem #5: TabView and NavigationStack load too much at startup

Hardly anyone writes about this, but SwiftUI’s default TabView is not lazy. All tabs and their views are initialized at startup. If each tab has a .task that fetches data - congratulations, you just kicked off 4–5 network requests in parallel before the user has seen any content at all.

// All tabs initialized at startup
TabView {
    HomeView()      // .task → fetch home feed
    SearchView()    // .task → fetch trending
    ProfileView()   // .task → fetch user profile
    SettingsView()  // .task → fetch config
}

Solution: defer tab initialization until first display:

// Lazy tab initialization
struct LazyTab<Content: View>: View {
    @State private var hasAppeared = false
    let content: () -> Content

    init(@ViewBuilder content: @escaping () -> Content) {
        self.content = content
    }

    var body: some View {
        Group {
            if hasAppeared {
                content()
            } else {
                Color.clear
            }
        }
        .onAppear {
            if !hasAppeared {
                hasAppeared = true
            }
        }
    }
}

// Usage
TabView {
    LazyTab { HomeView() }
        .tabItem { Label("Home", systemImage: "house") }
    LazyTab { SearchView() }
        .tabItem { Label("Search", systemImage: "magnifyingglass") }
    // ...
}

The effect: only the first tab loads at startup. The rest wait until the user actually switches to them. This dramatically reduces Time to First Meaningful Content - the user sees the home feed faster because it is not competing for resources with four other requests.

In larger apps with 5+ tabs or complex per-tab dependency graphs, the LazyTab wrapper may not be enough. Consider full manual control with TabView(selection:) and conditional rendering of tab content based on a Set<Tab> of visited tabs - or go all the way with a custom tab bar built on NavigationSplitView, which gives you full control over view lifecycle.
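A sketch of that manual-control variant, reusing the HomeView/SearchView/ProfileView names from the earlier example (Tab and RootTabView are illustrative names): real content is rendered only for tabs the user has actually visited.

```swift
import SwiftUI

enum Tab: Hashable { case home, search, profile }

// Manual lazy tabs: a Set<Tab> of visited tabs gates which tab
// bodies (and their .task side effects) ever get built.
struct RootTabView: View {
    @State private var selection: Tab = .home
    @State private var visited: Set<Tab> = [.home] // first tab loads eagerly

    var body: some View {
        TabView(selection: $selection) {
            lazyContent(for: .home) { HomeView() }
                .tabItem { Label("Home", systemImage: "house") }
                .tag(Tab.home)
            lazyContent(for: .search) { SearchView() }
                .tabItem { Label("Search", systemImage: "magnifyingglass") }
                .tag(Tab.search)
            lazyContent(for: .profile) { ProfileView() }
                .tabItem { Label("Profile", systemImage: "person") }
                .tag(Tab.profile)
        }
        .onChange(of: selection) { _, newValue in
            visited.insert(newValue) // first visit unlocks the real content
        }
    }

    @ViewBuilder
    private func lazyContent<Content: View>(
        for tab: Tab,
        @ViewBuilder content: () -> Content
    ) -> some View {
        if visited.contains(tab) {
            content()
        } else {
            Color.clear
        }
    }
}
```

Compared to LazyTab, this centralizes the “has this tab been shown yet” state, which makes it easier to add policies like prefetching the next likely tab or evicting content for memory pressure.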

Problem #6: Optimistic UI - a pattern nobody talks about in iOS

Optimistic UI is a well-known pattern in the web world (React, Remix, SWR), but it is surprisingly rare in iOS apps. The idea is simple: assume success, render immediately, roll back on failure.

Classic example - a “like” button:

// Wait for the server
func toggleLike() {
    Task {
        let success = await api.toggleLike(postId)
        if success {
            isLiked.toggle()
            likeCount += isLiked ? 1 : -1
        }
    }
}

// Optimistic UI - immediate change
func toggleLike() {
    // 1. Immediate UI change
    let previousState = isLiked
    let previousCount = likeCount

    isLiked.toggle()
    likeCount += isLiked ? 1 : -1

    // 2. Haptic feedback
    UIImpactFeedbackGenerator(style: .light).impactOccurred()

    // 3. Sync with the server
    Task {
        let success = await api.toggleLike(postId)
        if !success {
            // 4. Roll back on failure
            isLiked = previousState
            likeCount = previousCount
            // Show a subtle, non-blocking error
        }
    }
}

This pattern works for operations that almost always succeed: like/unlike, add to bookmarks, status change, mark as read. Do not use it for operations that can fail for business reasons (payments, limited-availability reservations) - and also avoid it when rollback is costly from a UX perspective. Canceling an order, changing a shipping address, removing a team member - these are cases where the user may already have mentally “moved on” after seeing an optimistic success, and a sudden rollback creates more confusion and erodes trust more than a short wait would.

Problem #7: .id() - the silent killer of laziness

This point is less well known, but extremely important in large lists. The .id() modifier on List or LazyVStack elements forces eager evaluation - SwiftUI must immediately initialize all views in order to build the identifier map.

// .id() forces initialization of ALL elements
ScrollView {
    LazyVStack {
        ForEach(items) { item in
            ItemRow(item: item)
                .id(item.objectID) // ← kills laziness!
        }
    }
}

If you have 1,000 items, SwiftUI must initialize all of them immediately - because it needs the full .id() map for correct diffing. This problem is often discussed in the context of Core Data, where .id(item.objectID) on a list with thousands of elements can add a full second to view display time.

Solution: rely on Identifiable conformance in ForEach instead of an explicit .id() modifier:

// Identifiable - SwiftUI manages identity lazily
ForEach(items) { item in // items: [Item] where Item: Identifiable
    ItemRow(item: item)
}

Problem #8: the UI “works,” but you can feel resistance

This is another problem you will almost never see in Instruments. CPU low. FPS stable. Zero dropped frames. And yet the UI feels heavy, as if it is “dragging behind your finger.”

The cause is excessive body invalidations - large parts of the view tree re-render on every tiny state change. Most often, the culprit is overly broad @State / @Observable scope.

How does this happen?

struct DashboardView: View {
    @State private var user: User
    @State private var notifications: [Notification]
    @State private var searchText: String = ""

    var body: some View {
        VStack {
            HeaderView(user: user)
            SearchBar(text: $searchText)
            NotificationsList(notifications: notifications)
        }
    }
}

Changing searchText causes the entire VStack to re-render, including:

  • HeaderView
  • NotificationsList

Views unrelated to search get recomputed too. Each one is fast on its own.

The problem appears at scale:

  • complex layouts
  • many modifiers
  • nested ForEach
  • animations

You get a series of small, cheap evaluations that together create the feeling of sticky UI, even though technically everything is fast. Why does Instruments stay silent?

Because:

  • a single render takes 1–3 ms
  • no individual operation crosses the hitch threshold
  • but several of those renders land in a single frame

This is exactly that case - measured fast, perceived slow.

Solution: narrow the scope of state

Rule: state should live as low in the view tree as possible

// Too broad
struct DashboardView: View {
    @State private var searchText = ""

    var body: some View {
        VStack {
            HeaderView()
            SearchBar(text: $searchText)
            NotificationsList(searchText: searchText)
        }
    }
}
// Isolation
struct DashboardView: View {
    var body: some View {
        VStack {
            HeaderView()
            SearchSection()
            NotificationsList()
        }
    }
}

struct SearchSection: View {
    @State private var searchText = ""

    var body: some View {
        SearchBar(text: $searchText)
    }
}

Changing searchText no longer touches the rest of the tree.

Break views apart

SwiftUI scales well only when views are small and isolated.

// instead of:
BigComplexView(data: data)

// better:
Header(data: data.header)
ContentList(items: data.items)
Footer(stats: data.stats)

Smaller data scope = fewer invalidations.

EquatableView (a specialist tool)

For expensive views that rarely change:

EquatableView(content: ExpensiveView(model: model))

or:

struct ExpensiveView: View, Equatable {
    let model: Model

    static func == (lhs: Self, rhs: Self) -> Bool {
        lhs.model.id == rhs.model.id
    }

    var body: some View {
        ComplexChart(model: model) // the expensive subtree
    }
}

Use this selectively - it is not a systemic solution, just a point optimization.

Check what is really rendering by adding this at the top of body:

let _ = Self._printChanges()

If you see that changing one field causes half the screen to rebuild, you have a problem.
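In context, the idiom looks like this (DebuggedView is an illustrative name; _printChanges() is an underscored, debug-only API, so keep it out of release builds):

```swift
import SwiftUI

struct DebuggedView: View {
    @State private var count = 0

    var body: some View {
        // Logs which dependency triggered this body evaluation,
        // e.g. "DebuggedView: _count changed."
        let _ = Self._printChanges()

        Button("Count: \(count)") { count += 1 }
    }
}
```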

Problem #9: cascading .onChange / .onReceive / .task(id:) - the invisible refresh storm

This may be the most insidious killer of perceived performance, because it leaves almost no traces in Instruments. Each individual view update is fast - maybe 2 ms. But when one state change triggers a chain of .onChange → state mutation → .task(id:) → new state → another .onChange, you get 5–10 body evaluations in one frame. None of them individually registers as a hitch. Together, they produce a palpable “stickiness” - the UI responds, but it feels like it is slogging through mud.

A typical example is a filters/search screen where many controls depend on each other:

// Cascading reactivity - every modifier triggers the next
struct FilteredListView: View {
    @State private var searchText = ""
    @State private var selectedCategory: Category?
    @State private var sortOrder: SortOrder = .relevance
    @State private var filteredItems: [Item] = []

    var body: some View {
        VStack {
            TextField("Search", text: $searchText)
            CategoryPicker(selection: $selectedCategory)
            SortPicker(selection: $sortOrder)
            List(filteredItems) { item in ItemRow(item: item) }
        }
        .onChange(of: searchText) { _, _ in applyFilters() }
        .onChange(of: selectedCategory) { _, _ in applyFilters() }
        .onChange(of: sortOrder) { _, _ in applyFilters() }
    }

    func applyFilters() {
        // Runs once per .onChange - so up to three times for a single
        // user action if changing the category also resets sortOrder
        // and clears searchText
        filteredItems = items
            .filter(by: searchText, category: selectedCategory)
            .sorted(by: sortOrder)
    }
}

The problem: if changing selectedCategory also resets sortOrder (a common UX pattern), you get two .onChange calls from one user action, and applyFilters() runs twice, producing two separate state mutations and two body evaluations.

The solution: consolidate reactive triggers into one observation point:

// One derivation point - no cascade

// FilterToken groups all filter inputs into one Equatable trigger
struct FilterToken: Equatable {
    let search: String
    let category: Category?
    let sort: SortOrder
}

struct FilteredListView: View {
    @State private var searchText = ""
    @State private var selectedCategory: Category?
    @State private var sortOrder: SortOrder = .relevance

    // Compute filtered items as a computed property (instead of storing them in state)
    // or use .task(id:) with a consolidated trigger
    private var filterToken: FilterToken {
        FilterToken(search: searchText, category: selectedCategory, sort: sortOrder)
    }

    var body: some View {
        VStack {
            TextField("Search", text: $searchText)
            CategoryPicker(selection: $selectedCategory)
            SortPicker(selection: $sortOrder)
            FilteredList(token: filterToken, allItems: items)
        }
    }
}

// Separate child view - reevaluates only when the token actually changes
struct FilteredList: View {
    let token: FilterToken
    let allItems: [Item]
    @State private var filteredItems: [Item] = []

    var body: some View {
        List(filteredItems) { item in ItemRow(item: item) }
            .task(id: token) {
                // Runs exactly once per unique token value -
                // even if many @State values changed in the same frame
                filteredItems = await FilterEngine.apply(token, to: allItems)
            }
    }
}

The key observation: .task(id:) with an Equatable token coalesces many state changes within a single frame into one task execution. Three .onChange callbacks become one .task(id:) evaluation. The user feels one clean update instead of a flickering cascade.

Rules for avoiding refresh storms: keep .onChange handlers free of state mutation (use them for side effects like logging or analytics, not for driving other state), prefer derived state (computed properties or .task(id:) with composite tokens) over reactive chains, and extract filtering/sorting logic into child views with their own .task(id:) lifecycle.

Decision matrix: when to use what

Symptom                                    | Instruments says    | Real cause                    | Solution
“The button doesn’t respond”               | CPU idle            | No immediate feedback         | Optimistic UI + haptics
“The screen takes too long to load”        | Network: 800 ms     | Empty UI while waiting        | Skeleton + data preloading
“The app stutters”                         | 200 ms hitch        | async on MainActor            | nonisolated + Task.detached
“The animation feels stiff”                | 60 FPS              | .easeInOut instead of .spring | Spring physics + haptics
“Startup is slow”                          | CPU burst at launch | Eager tab/view initialization | LazyTab + lazy AppContainer
“The list stutters on entry”               | Spike in view init  | .id() forcing eager eval      | Identifiable instead of .id()
“The screen feels sticky”                  | All frames < 16 ms  | Excessive body invalidations  | Narrow state / split views
“The screen feels sticky” (filter changes) | All frames < 16 ms  | Cascading .onChange           | .task(id:) with composite token

Perceived performance checklist

Before every PR that introduces a new screen, ask yourself these questions:

  1. Does a tap produce immediate feedback? - UI state change + haptic should happen synchronously with the interaction.
  2. Does the user see the structure of the screen before the data arrives? - Skeleton with .redacted() instead of ProgressView.
  3. Am I passing into the destination view the data I already have? - Do not force a view to re-fetch what the previous screen was already displaying.
  4. Is heavy work explicitly off MainActor? - nonisolated on CPU-intensive methods.
  5. Do interactive animations use .spring()? - .easeInOut for automatic transitions, .spring() for direct user interaction.
  6. Is initialization lazy? - TabView, heavy views, services - load on demand.
  7. Are .onChange handlers free of state mutation? - Derive state from one .task(id:) with a composite token instead of chaining reactive callbacks.

Summary

Instruments is an essential tool - but it measures only one half of the equation. The other half is the psychology of perception: active vs passive waiting, immediate feedback vs silence, progressive content reveal vs abrupt state jumps.

The best iOS apps are not necessarily the fastest in benchmarks. They are the fastest in the user’s perception. And that is something an architect or tech lead can design systemically - not as something tacked on at the end during code review, but as a consciously designed part of UX, present from the first view to the last modal.

The next time QA reports that “the screen stutters,” before you open Time Profiler, tap that screen yourself and feel what the user feels. Sometimes the answer is not in the stack trace, but in those 200 milliseconds of silence.

Rafał Dubiel

Senior iOS Developer with 8+ years of experience building mobile applications. Passionate about clean architecture, Swift, and sharing knowledge with the community.
