FUITextRecognitionView

open class FUITextRecognitionView : UIView

The FUITextRecognitionView can be used to perform text recognition using the device camera. Video captured from the camera is shown across the entire bounds of the view. Use captureMaskPath to define the sub-frame of the view in which text is recognized; the rest of the frame is dimmed by overlayView. This default behavior can be changed by setting the view and layer properties of overlayView. All observations, that is, recognized text blocks and their corresponding bounding boxes, are passed back to observationHandler. To give the user a hint about which texts are being recognized, use showTexts to render the observations on top of the captured video.

Call startSession to start capturing video. This is normally done when the view appears. Since text recognition models are memory- and CPU-intensive, recognition does not run until the user has stabilized the device camera. After that, observationHandler is called for every video frame with the observations. observationHandler can be used to perform validation and filtering on the observations; for example, accept the observations as a credit card number only if they contain 16 numeric digits in total, and discard any other observations. Once the filtered observations meet the desired criteria, call stopSession to stop capturing video. Make sure to also call stopSession when the view disappears.

Example Initialization and Configuration


let recognitionView = FUITextRecognitionView()

recognitionView.observationHandler = { observations in

   let filteredObservations = <#filter out unwanted text#>

   //show text over captured video to give the user feedback about what is being scanned
   recognitionView.showTexts(for: filteredObservations)

   let areObservationsValid = <#filteredObservations meets desired criteria#>

   if areObservationsValid {
       DispatchQueue.main.async {
           //place captured text in text field
           textField.text = filteredObservations.map { $0.value }.joined(separator:" ")
       }
       //on returning true, the session is automatically stopped and observationHandler is no longer called
       return true
   }
   return false
}

//start session to capture frames from back camera
recognitionView.startSession()
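
The start and stop calls are typically tied to the view lifecycle. A minimal sketch, assuming recognitionView is a property of the hosting view controller:

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    //begin capturing; recognition starts once the camera is stable
    recognitionView.startSession()
}

override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    //release the camera when the view goes away
    recognitionView.stopSession()
}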


  • Provide custom requests to be performed on the captured video frames, which are cropped based on captureMaskPath.

    let mlmodel = ...
    //bail out if the Core ML model cannot be wrapped for Vision
    guard let vnmodel = try? VNCoreMLModel(for: mlmodel) else { return }
    let request = VNCoreMLRequest(model: vnmodel, completionHandler: self.detectTextHandler)
    
    recognitionView.requests = [request]
    

    Use the completionHandler in VNCoreMLRequest to get the observations.

    If no custom requests are set, the view uses the default text recognition requests. In that case, supply observationHandler to get the observations.

    Declaration

    Swift

    open var requests: [VNImageBasedRequest] { get set }
  • Called when the default requests are used and no custom requests are set. The handler is called on every captured video frame after startSession is called and the device camera is stable, until stopSession is called or the handler returns true.

    Declaration

    Swift

    open var observationHandler: (([RecognizeTextObservation]) -> Bool)?
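
    For example, a handler that accepts the observations only once they contain 16 numeric digits in total, as in the credit card case described above. This is a sketch; the filtering criteria are illustrative:

    recognitionView.observationHandler = { observations in
        //keep only purely numeric text blocks
        let digits = observations.filter { !$0.value.isEmpty && $0.value.allSatisfy { $0.isNumber } }
        //give the user feedback about what has been recognized
        recognitionView.showTexts(for: digits)
        //returning true stops the session and the handler is no longer called
        return digits.reduce(0) { $0 + $1.value.count } == 16
    }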
  • Captured video inside the bezier path's bounds is used to detect text. The path is translated to the center of the view.

    Declaration

    Swift

    open var captureMaskPath: UIBezierPath
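
    For example, to restrict recognition to a card-shaped region in the middle of the view (a sketch; the dimensions are arbitrary):

    //only text inside this rounded rectangle is recognized; the path is centered by the view
    recognitionView.captureMaskPath = UIBezierPath(
        roundedRect: CGRect(x: 0, y: 0, width: 280, height: 180),
        cornerRadius: 8
    )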
  • Change to customize the quality level or bitrate of the captured video.

    Declaration

    Swift

    open var sessionPreset: AVCaptureSession.Preset { get set }
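
    For example, to lower the capture resolution (a sketch; choose a preset supported by the device):

    //use a 1280x720 preset to reduce memory and CPU load
    recognitionView.sessionPreset = .hd1280x720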
  • Dims the complete view excluding the centered captureMaskPath.

    Declaration

    Swift

    open var overlayView: OverlayView { get }
  • Supply a subset of observations to render their text over the captured video. Supply an empty array to clear any rendered text.

    Declaration

    Swift

    open func showTexts(for observations: [RecognizeTextObservation], with checkingResults: [NSTextCheckingResult]?, attributes: [NSAttributedString.Key: Any] = [:], scaleToFit: Bool = true)

    Parameters

    for

    Subset of observations from observationHandler

    with

    Only observations that are part of the checking results are used to render text

    attributes

    Attributes applied to the rendered text

    scaleToFit

    When set to true, chooses an appropriate font size so that the rendered text completely overlays the text in the captured video, ignoring any font size supplied in attributes
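
    For example, rendering filtered observations in yellow without any checking results (a sketch; filteredObservations is assumed to come from observationHandler):

    recognitionView.showTexts(
        for: filteredObservations,
        with: nil,
        attributes: [.foregroundColor: UIColor.yellow]
    )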

  • Start capturing the video from the device camera. Requests are not performed until after the camera stabilizes.

    Declaration

    Swift

    public func startSession()
  • Stop capturing the video from the device camera.

    Declaration

    Swift

    public func stopSession()
  • Dims the complete FUITextRecognitionView excluding the centered captureMaskPath. Since this is backed by a CAShapeLayer, strokeColor can be set to change the color of the captureMaskPath.

    Declaration

    Swift

    final public class OverlayView : UIView
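
    For example, to change the mask outline and dimming colors. This is a sketch; it assumes the CAShapeLayer is the overlay view's backing layer:

    if let shapeLayer = recognitionView.overlayView.layer as? CAShapeLayer {
        //outline the capture mask in white
        shapeLayer.strokeColor = UIColor.white.cgColor
        //dim the area outside the mask more strongly (assumption: fillColor controls the dim)
        shapeLayer.fillColor = UIColor.black.withAlphaComponent(0.6).cgColor
    }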