The state-of-the-art (SOTA) transformer tracker TransT achieves high tracking accuracy. However, the time and space complexity of its attention operation grows quadratically with the spatial dimension of the feature vectors, which makes TransT difficult to deploy on resource-constrained devices. This paper proposes the Local Information Patch Attention Free Transformer (LIP-AFT), built from a Local Information Patch Self-Attention Free Transformer (LIPS-AFT) and a Local Information Patch Cross-Attention Free Transformer (LIPC-AFT), to achieve linear time and space complexity together with high accuracy. LIP-AFT retains global connectivity between patches while focusing on simple but strong local attention patterns. The proposed tracker outperforms both SOTA trackers and TransT equipped with various SOTA attention algorithms in accuracy and complexity. Moreover, its inference phase runs at 41 fps on an RTX 2070S GPU.
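To illustrate why an attention-free formulation scales linearly, the following is a minimal NumPy sketch of AFT-simple (the basic Attention Free Transformer operation of Zhai et al., which LIP-AFT is assumed to build on; this is not the paper's exact LIP-AFT module). Instead of forming a T x T attention matrix, each output is a sigmoid-gated query multiplied elementwise by a single softmax-pooled value vector, so time and memory stay O(T d):

```python
import numpy as np

def aft_simple(Q, K, V):
    """AFT-simple: attention-free mixing, linear in sequence length T.

    Q, K, V: arrays of shape (T, d). The pooling weights are a softmax
    over K shared by every query position, so no T x T matrix is formed.
    """
    # Numerically stable softmax over the T positions, per channel.
    weights = np.exp(K - K.max(axis=0, keepdims=True))      # (T, d)
    pooled = (weights * V).sum(axis=0, keepdims=True) \
             / weights.sum(axis=0, keepdims=True)           # (1, d)
    # Sigmoid gate on the query modulates the globally pooled value.
    gate = 1.0 / (1.0 + np.exp(-Q))                         # (T, d)
    return gate * pooled                                    # (T, d)
```

Because the pooled value is computed once and broadcast to all positions, the cost per layer is linear in the number of feature-map positions, in contrast to the quadratic cost of standard dot-product attention.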